00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3697 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3298 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.127 Using shallow fetch with depth 1 00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.127 > git --version # timeout=10 00:00:00.150 > git --version # 'git version 2.39.2' 00:00:00.150 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.165 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.165 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.860 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.870 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.879 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:04.879 > git config core.sparsecheckout # timeout=10 00:00:04.888 > git read-tree -mu HEAD # timeout=10 00:00:04.901 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:04.918 Commit message: "packer: Add bios builder" 00:00:04.918 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:04.989 [Pipeline] Start of Pipeline 00:00:04.998 [Pipeline] library 00:00:04.999 Loading library shm_lib@master 00:00:04.999 Library shm_lib@master is cached. Copying from home. 00:00:05.010 [Pipeline] node 00:00:05.023 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.024 [Pipeline] { 00:00:05.031 [Pipeline] catchError 00:00:05.032 [Pipeline] { 00:00:05.041 [Pipeline] wrap 00:00:05.047 [Pipeline] { 00:00:05.052 [Pipeline] stage 00:00:05.053 [Pipeline] { (Prologue) 00:00:05.211 [Pipeline] sh 00:00:05.489 + logger -p user.info -t JENKINS-CI 00:00:05.507 [Pipeline] echo 00:00:05.508 Node: GP11 00:00:05.515 [Pipeline] sh 00:00:05.817 [Pipeline] setCustomBuildProperty 00:00:05.830 [Pipeline] echo 00:00:05.831 Cleanup processes 00:00:05.835 [Pipeline] sh 00:00:06.116 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.116 1230423 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.129 [Pipeline] sh 00:00:06.414 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.414 ++ grep -v 'sudo pgrep' 00:00:06.414 ++ awk '{print $1}' 00:00:06.414 + sudo kill -9 00:00:06.414 + true 00:00:06.428 [Pipeline] cleanWs 00:00:06.438 [WS-CLEANUP] Deleting project workspace... 00:00:06.438 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.444 [WS-CLEANUP] done 00:00:06.447 [Pipeline] setCustomBuildProperty 00:00:06.457 [Pipeline] sh 00:00:06.740 + sudo git config --global --replace-all safe.directory '*' 00:00:06.826 [Pipeline] httpRequest 00:00:06.861 [Pipeline] echo 00:00:06.863 Sorcerer 10.211.164.101 is alive 00:00:06.872 [Pipeline] httpRequest 00:00:06.877 HttpMethod: GET 00:00:06.878 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.878 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.906 Response Code: HTTP/1.1 200 OK 00:00:06.907 Success: Status code 200 is in the accepted range: 200,404 00:00:06.907 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:27.227 [Pipeline] sh 00:00:27.512 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:27.525 [Pipeline] httpRequest 00:00:27.548 [Pipeline] echo 00:00:27.550 Sorcerer 10.211.164.101 is alive 00:00:27.556 [Pipeline] httpRequest 00:00:27.560 HttpMethod: GET 00:00:27.561 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:27.561 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:27.588 Response Code: HTTP/1.1 200 OK 00:00:27.589 Success: Status code 200 is in the accepted range: 200,404 00:00:27.590 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:19.672 [Pipeline] sh 00:01:19.975 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:23.285 [Pipeline] sh 00:01:23.571 + git -C spdk log --oneline -n5 00:01:23.571 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:23.571 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:23.571 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:23.571 d005e023b raid: fix empty slot not updated in sb after resize 00:01:23.571 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:23.590 [Pipeline] withCredentials 00:01:23.602 > git --version # timeout=10 00:01:23.615 > git --version # 'git version 2.39.2' 00:01:23.634 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:23.636 [Pipeline] { 00:01:23.645 [Pipeline] retry 00:01:23.648 [Pipeline] { 00:01:23.665 [Pipeline] sh 00:01:23.966 + git ls-remote http://dpdk.org/git/dpdk main 00:01:25.365 [Pipeline] } 00:01:25.385 [Pipeline] // retry 00:01:25.390 [Pipeline] } 00:01:25.410 [Pipeline] // withCredentials 00:01:25.420 [Pipeline] httpRequest 00:01:25.443 [Pipeline] echo 00:01:25.445 Sorcerer 10.211.164.101 is alive 00:01:25.455 [Pipeline] httpRequest 00:01:25.461 HttpMethod: GET 00:01:25.461 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:25.462 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:25.465 Response Code: HTTP/1.1 200 OK 00:01:25.465 Success: Status code 200 is in the accepted range: 200,404 00:01:25.466 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:29.695 [Pipeline] sh 00:01:29.982 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:31.903 [Pipeline] sh 00:01:32.187 + git -C dpdk log --oneline -n5 00:01:32.187 82c47f005b version: 24.07-rc3 00:01:32.187 d9d1be537e doc: remove reference to mbuf pkt field 00:01:32.187 52c7393a03 doc: set required MinGW version in Windows guide 00:01:32.187 92439dc9ac dts: improve starting and stopping interactive shells 00:01:32.187 2b648cd4e4 dts: add context manager for interactive shells 00:01:32.197 [Pipeline] } 00:01:32.216 [Pipeline] // stage 00:01:32.224 [Pipeline] stage 00:01:32.226 [Pipeline] { (Prepare) 00:01:32.248 [Pipeline] writeFile 00:01:32.266 [Pipeline] sh 00:01:32.549 + logger -p user.info -t JENKINS-CI 00:01:32.563 [Pipeline] sh 00:01:32.849 + logger -p user.info -t JENKINS-CI 00:01:32.863 [Pipeline] sh 00:01:33.150 + cat autorun-spdk.conf 00:01:33.150 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.150 SPDK_TEST_NVMF=1 00:01:33.150 SPDK_TEST_NVME_CLI=1 00:01:33.150 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.150 SPDK_TEST_NVMF_NICS=e810 00:01:33.150 SPDK_TEST_VFIOUSER=1 00:01:33.150 SPDK_RUN_UBSAN=1 00:01:33.150 NET_TYPE=phy 00:01:33.150 SPDK_TEST_NATIVE_DPDK=main 00:01:33.150 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.158 RUN_NIGHTLY=1 00:01:33.163 [Pipeline] readFile 00:01:33.190 [Pipeline] withEnv 00:01:33.192 [Pipeline] { 00:01:33.207 [Pipeline] sh 00:01:33.494 + set -ex 00:01:33.495 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:33.495 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:33.495 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.495 ++ SPDK_TEST_NVMF=1 00:01:33.495 ++ SPDK_TEST_NVME_CLI=1 00:01:33.495 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.495 ++ SPDK_TEST_NVMF_NICS=e810 00:01:33.495 ++ SPDK_TEST_VFIOUSER=1 00:01:33.495 ++ SPDK_RUN_UBSAN=1 00:01:33.495 ++ NET_TYPE=phy 00:01:33.495 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:33.495 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:01:33.495 ++ RUN_NIGHTLY=1 00:01:33.495 + case $SPDK_TEST_NVMF_NICS in 00:01:33.495 + DRIVERS=ice 00:01:33.495 + [[ tcp == \r\d\m\a ]] 00:01:33.495 + [[ -n ice ]] 00:01:33.495 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:33.495 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:33.495 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:33.495 rmmod: ERROR: Module irdma is not currently loaded 00:01:33.495 rmmod: ERROR: Module i40iw is not currently loaded 00:01:33.495 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:33.495 + true 00:01:33.495 + for D in $DRIVERS 00:01:33.495 + sudo modprobe ice 00:01:33.495 + exit 0 00:01:33.504 [Pipeline] } 00:01:33.526 [Pipeline] // withEnv 00:01:33.532 [Pipeline] } 00:01:33.551 [Pipeline] // stage 00:01:33.562 [Pipeline] catchError 00:01:33.564 [Pipeline] { 00:01:33.581 [Pipeline] timeout 00:01:33.582 Timeout set to expire in 50 min 00:01:33.584 [Pipeline] { 00:01:33.602 [Pipeline] stage 00:01:33.604 [Pipeline] { (Tests) 00:01:33.622 [Pipeline] sh 00:01:33.910 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.911 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.911 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.911 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:33.911 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.911 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:33.911 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:33.911 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:33.911 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:33.911 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:33.911 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:33.911 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.911 + source /etc/os-release 00:01:33.911 ++ NAME='Fedora Linux' 00:01:33.911 ++ VERSION='38 (Cloud Edition)' 00:01:33.911 ++ ID=fedora 00:01:33.911 ++ VERSION_ID=38 00:01:33.911 ++ VERSION_CODENAME= 00:01:33.911 ++ PLATFORM_ID=platform:f38 00:01:33.911 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:33.911 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:33.911 ++ LOGO=fedora-logo-icon 00:01:33.911 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:33.911 ++ HOME_URL=https://fedoraproject.org/ 00:01:33.911 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:33.911 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:33.911 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:33.911 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:33.911 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:33.911 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:33.911 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:33.911 ++ SUPPORT_END=2024-05-14 00:01:33.911 ++ VARIANT='Cloud Edition' 00:01:33.911 ++ VARIANT_ID=cloud 00:01:33.911 + uname -a 00:01:33.911 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:33.911 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:34.847 Hugepages 00:01:34.847 node hugesize free / total 00:01:34.847 node0 1048576kB 0 / 0 00:01:34.847 node0 2048kB 0 / 0 00:01:34.847 node1 1048576kB 0 / 0 00:01:34.847 node1 2048kB 0 / 0 00:01:34.847 00:01:34.847 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:34.847 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 
00:01:34.847 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:34.847 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:34.847 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:34.847 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:34.847 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:34.847 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:34.847 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:34.847 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:34.847 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:34.847 + rm -f /tmp/spdk-ld-path 00:01:34.847 + source autorun-spdk.conf 00:01:34.847 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.847 ++ SPDK_TEST_NVMF=1 00:01:34.847 ++ SPDK_TEST_NVME_CLI=1 00:01:34.848 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:34.848 ++ SPDK_TEST_NVMF_NICS=e810 00:01:34.848 ++ SPDK_TEST_VFIOUSER=1 00:01:34.848 ++ SPDK_RUN_UBSAN=1 00:01:34.848 ++ NET_TYPE=phy 00:01:34.848 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:34.848 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.848 ++ RUN_NIGHTLY=1 00:01:34.848 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:34.848 + [[ -n '' ]] 00:01:34.848 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:34.848 + for M in /var/spdk/build-*-manifest.txt 00:01:34.848 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:34.848 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:34.848 + for M in /var/spdk/build-*-manifest.txt 00:01:34.848 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:34.848 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:34.848 ++ uname 00:01:34.848 + [[ Linux == \L\i\n\u\x ]] 00:01:34.848 + sudo dmesg -T 00:01:35.105 + sudo dmesg --clear 00:01:35.105 + dmesg_pid=1231130 00:01:35.105 + [[ Fedora Linux == FreeBSD ]] 00:01:35.105 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.105 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.105 + sudo dmesg -Tw 00:01:35.105 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:35.105 + [[ -x /usr/src/fio-static/fio ]] 00:01:35.105 + export FIO_BIN=/usr/src/fio-static/fio 00:01:35.105 + FIO_BIN=/usr/src/fio-static/fio 00:01:35.105 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:35.105 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:35.105 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:35.105 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:35.105 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:35.105 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:35.105 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:35.105 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:35.105 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:35.105 Test configuration: 00:01:35.105 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.105 SPDK_TEST_NVMF=1 00:01:35.105 SPDK_TEST_NVME_CLI=1 00:01:35.105 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.105 SPDK_TEST_NVMF_NICS=e810 00:01:35.105 SPDK_TEST_VFIOUSER=1 00:01:35.105 SPDK_RUN_UBSAN=1 00:01:35.105 NET_TYPE=phy 00:01:35.105 SPDK_TEST_NATIVE_DPDK=main 00:01:35.105 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.105 RUN_NIGHTLY=1 18:02:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:35.105 18:02:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:35.105 18:02:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:35.106 18:02:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:35.106 18:02:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.106 18:02:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.106 18:02:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.106 18:02:01 -- paths/export.sh@5 -- $ export PATH 00:01:35.106 18:02:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.106 18:02:01 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:35.106 18:02:01 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:35.106 18:02:01 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722009721.XXXXXX 00:01:35.106 18:02:01 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1722009721.qu4kyd 00:01:35.106 18:02:01 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:35.106 18:02:01 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:01:35.106 18:02:01 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.106 18:02:01 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:35.106 18:02:01 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:35.106 18:02:01 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:35.106 18:02:01 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:35.106 18:02:01 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:35.106 18:02:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.106 18:02:01 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:35.106 18:02:01 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:35.106 18:02:01 -- pm/common@17 -- $ local monitor 00:01:35.106 18:02:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.106 18:02:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.106 18:02:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.106 18:02:01 -- pm/common@21 -- $ date +%s 00:01:35.106 18:02:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.106 18:02:01 -- pm/common@21 -- $ date +%s 00:01:35.106 18:02:01 -- pm/common@25 -- $ sleep 1 00:01:35.106 18:02:01 -- pm/common@21 -- $ date +%s 00:01:35.106 18:02:01 -- pm/common@21 -- $ date +%s 00:01:35.106 18:02:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722009721 00:01:35.106 18:02:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722009721 00:01:35.106 18:02:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722009721 00:01:35.106 18:02:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722009721 00:01:35.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722009721_collect-vmstat.pm.log 00:01:35.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722009721_collect-cpu-load.pm.log 00:01:35.106 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722009721_collect-cpu-temp.pm.log 00:01:35.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722009721_collect-bmc-pm.bmc.pm.log 00:01:36.045 18:02:02 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:36.045 18:02:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:36.045 18:02:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:36.045 18:02:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:36.045 18:02:02 -- spdk/autobuild.sh@16 -- $ date -u 00:01:36.045 Fri Jul 26 04:02:02 PM UTC 2024 00:01:36.045 18:02:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:36.045 v24.09-pre-321-g704257090 00:01:36.045 18:02:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:36.045 18:02:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:36.045 18:02:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:36.045 18:02:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:36.045 18:02:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:36.045 18:02:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.045 ************************************ 00:01:36.045 START TEST ubsan 00:01:36.045 ************************************ 00:01:36.045 18:02:02 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:36.045 using ubsan 00:01:36.045 00:01:36.045 real 0m0.000s 00:01:36.045 user 0m0.000s 00:01:36.045 sys 0m0.000s 00:01:36.045 18:02:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:36.045 18:02:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:36.045 ************************************ 00:01:36.045 END TEST ubsan 00:01:36.045 ************************************ 00:01:36.045 18:02:02 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:36.045 18:02:02 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:36.045 18:02:02 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:36.045 18:02:02 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:36.045 18:02:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:36.045 18:02:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.045 ************************************ 00:01:36.045 START TEST build_native_dpdk 00:01:36.045 ************************************ 00:01:36.045 18:02:02 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:36.045 18:02:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:36.304 
18:02:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:36.304 18:02:02 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:36.305 82c47f005b version: 24.07-rc3 00:01:36.305 d9d1be537e doc: remove reference to mbuf pkt field 00:01:36.305 52c7393a03 doc: set required MinGW version in Windows guide 00:01:36.305 92439dc9ac dts: improve starting and stopping interactive shells 00:01:36.305 2b648cd4e4 dts: add context manager for interactive shells 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0 
00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:36.305 patching file config/rte_config.h 00:01:36.305 Hunk #1 succeeded at 70 (offset 11 lines). 
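
The xtrace above steps through the shell version comparator behind the 'lt 24.07.0-rc3 21.11.0' check: version strings are split on '.', '-' and ':', each component is normalized by decimal() (leading zeros stripped, non-numeric parts such as "rc3" counted as 0), and components are compared left to right. Here 24 > 21 on the first component, so cmp_versions returns 1 ("not less than") and the rte_config.h patch is applied; the same helper runs again just below for the 'lt 24.07.0-rc3 24.07.0' check. A minimal reconstruction, as a sketch only (the function names follow the traced scripts/common.sh, but the real script also handles hex components and the '>='/'<=' operators):

    #!/usr/bin/env bash
    # Sketch of the version comparison traced above -- not the verbatim
    # scripts/common.sh.

    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then
            echo $((10#$d))            # "07" -> 7, as in the traced 'echo 7'
        else
            echo 0                     # "rc3" or "" counts as 0, as traced
        fi
    }

    cmp_versions() {                   # e.g. cmp_versions 24.07.0-rc3 '<' 21.11.0
        local IFS=.-:                  # split components on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=$(decimal "${ver1[v]:-0}")
            d2=$(decimal "${ver2[v]:-0}")
            if (( d1 > d2 )); then [[ $op == '>' ]]; return $?; fi
            if (( d1 < d2 )); then [[ $op == '<' ]]; return $?; fi
        done
        [[ $op == '==' ]]              # all components equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 24.07.0-rc3 21.11.0 && echo older || echo 'not older'   # prints: not older
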
00:01:36.305 18:02:02 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:36.305 18:02:02 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:36.306 18:02:02 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:36.306 18:02:02 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:36.306 18:02:02 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:36.306 18:02:02 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:36.306 18:02:02 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:40.515 The Meson build system 00:01:40.515 Version: 1.3.1 00:01:40.515 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:40.515 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:40.515 Build type: native build 00:01:40.515 Program cat found: YES (/usr/bin/cat) 00:01:40.515 Project name: DPDK 00:01:40.515 Project version: 24.07.0-rc3 00:01:40.515 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:40.515 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:40.515 Host machine cpu family: x86_64 00:01:40.515 Host machine cpu: x86_64 00:01:40.515 Message: ## Building in Developer Mode ## 00:01:40.515 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:40.515 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:40.515 Program options-ibverbs-static.sh found: YES 
(/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:40.515 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:40.515 Program cat found: YES (/usr/bin/cat) 00:01:40.515 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:40.515 Compiler for C supports arguments -march=native: YES 00:01:40.515 Checking for size of "void *" : 8 00:01:40.515 Checking for size of "void *" : 8 (cached) 00:01:40.515 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:40.515 Library m found: YES 00:01:40.515 Library numa found: YES 00:01:40.515 Has header "numaif.h" : YES 00:01:40.515 Library fdt found: NO 00:01:40.515 Library execinfo found: NO 00:01:40.515 Has header "execinfo.h" : YES 00:01:40.515 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:40.515 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:40.515 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:40.515 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:40.515 Run-time dependency openssl found: YES 3.0.9 00:01:40.515 Run-time dependency libpcap found: YES 1.10.4 00:01:40.515 Has header "pcap.h" with dependency libpcap: YES 00:01:40.515 Compiler for C supports arguments -Wcast-qual: YES 00:01:40.515 Compiler for C supports arguments -Wdeprecated: YES 00:01:40.515 Compiler for C supports arguments -Wformat: YES 00:01:40.515 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:40.515 Compiler for C supports arguments -Wformat-security: NO 00:01:40.515 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:40.515 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:40.515 Compiler for C supports arguments -Wnested-externs: YES 00:01:40.515 Compiler for C supports arguments -Wold-style-definition: YES 00:01:40.515 Compiler for C supports arguments -Wpointer-arith: YES 00:01:40.515 Compiler for C supports arguments -Wsign-compare: YES 00:01:40.515 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:40.515 Compiler for C supports arguments -Wundef: YES 00:01:40.515 Compiler for C supports arguments -Wwrite-strings: YES 00:01:40.515 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:40.515 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:40.515 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:40.515 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:40.515 Program objdump found: YES (/usr/bin/objdump) 00:01:40.515 Compiler for C supports arguments -mavx512f: YES 00:01:40.515 Checking if "AVX512 checking" compiles: YES 00:01:40.515 Fetching value of define "__SSE4_2__" : 1 00:01:40.515 Fetching value of define "__AES__" : 1 00:01:40.515 Fetching value of define "__AVX__" : 1 00:01:40.515 Fetching value of define "__AVX2__" : (undefined) 00:01:40.515 Fetching value of define "__AVX512BW__" : (undefined) 00:01:40.515 Fetching value of define "__AVX512CD__" : (undefined) 00:01:40.515 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:40.515 Fetching value of define "__AVX512F__" : (undefined) 00:01:40.515 Fetching value of define "__AVX512VL__" : (undefined) 00:01:40.515 Fetching value of define "__PCLMUL__" : 1 00:01:40.515 Fetching value of define "__RDRND__" : 1 00:01:40.515 Fetching value of define "__RDSEED__" : (undefined) 00:01:40.515 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:40.515 
Compiler for C supports arguments -Wno-format-truncation: YES 00:01:40.515 Message: lib/log: Defining dependency "log" 00:01:40.515 Message: lib/kvargs: Defining dependency "kvargs" 00:01:40.515 Message: lib/argparse: Defining dependency "argparse" 00:01:40.515 Message: lib/telemetry: Defining dependency "telemetry" 00:01:40.515 Checking for function "getentropy" : NO 00:01:40.515 Message: lib/eal: Defining dependency "eal" 00:01:40.515 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:40.515 Message: lib/ring: Defining dependency "ring" 00:01:40.515 Message: lib/rcu: Defining dependency "rcu" 00:01:40.515 Message: lib/mempool: Defining dependency "mempool" 00:01:40.515 Message: lib/mbuf: Defining dependency "mbuf" 00:01:40.515 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:40.515 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.515 Compiler for C supports arguments -mpclmul: YES 00:01:40.515 Compiler for C supports arguments -maes: YES 00:01:40.515 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:40.515 Compiler for C supports arguments -mavx512bw: YES 00:01:40.515 Compiler for C supports arguments -mavx512dq: YES 00:01:40.515 Compiler for C supports arguments -mavx512vl: YES 00:01:40.515 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:40.515 Compiler for C supports arguments -mavx2: YES 00:01:40.515 Compiler for C supports arguments -mavx: YES 00:01:40.515 Message: lib/net: Defining dependency "net" 00:01:40.515 Message: lib/meter: Defining dependency "meter" 00:01:40.515 Message: lib/ethdev: Defining dependency "ethdev" 00:01:40.515 Message: lib/pci: Defining dependency "pci" 00:01:40.515 Message: lib/cmdline: Defining dependency "cmdline" 00:01:40.515 Message: lib/metrics: Defining dependency "metrics" 00:01:40.515 Message: lib/hash: Defining dependency "hash" 00:01:40.515 Message: lib/timer: Defining dependency "timer" 00:01:40.515 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.515 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:40.515 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:40.515 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:40.515 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:40.515 Message: lib/acl: Defining dependency "acl" 00:01:40.515 Message: lib/bbdev: Defining dependency "bbdev" 00:01:40.515 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:40.515 Run-time dependency libelf found: YES 0.190 00:01:40.515 Message: lib/bpf: Defining dependency "bpf" 00:01:40.515 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:40.515 Message: lib/compressdev: Defining dependency "compressdev" 00:01:40.515 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:40.515 Message: lib/distributor: Defining dependency "distributor" 00:01:40.515 Message: lib/dmadev: Defining dependency "dmadev" 00:01:40.515 Message: lib/efd: Defining dependency "efd" 00:01:40.515 Message: lib/eventdev: Defining dependency "eventdev" 00:01:40.515 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:40.515 Message: lib/gpudev: Defining dependency "gpudev" 00:01:40.515 Message: lib/gro: Defining dependency "gro" 00:01:40.515 Message: lib/gso: Defining dependency "gso" 00:01:40.515 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:40.515 Message: lib/jobstats: Defining dependency "jobstats" 00:01:40.515 Message: lib/latencystats: Defining dependency 
"latencystats" 00:01:40.516 Message: lib/lpm: Defining dependency "lpm" 00:01:40.516 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.516 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:40.516 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:40.516 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:40.516 Message: lib/member: Defining dependency "member" 00:01:40.516 Message: lib/pcapng: Defining dependency "pcapng" 00:01:40.516 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:40.516 Message: lib/power: Defining dependency "power" 00:01:40.516 Message: lib/rawdev: Defining dependency "rawdev" 00:01:40.516 Message: lib/regexdev: Defining dependency "regexdev" 00:01:40.516 Message: lib/mldev: Defining dependency "mldev" 00:01:40.516 Message: lib/rib: Defining dependency "rib" 00:01:40.516 Message: lib/reorder: Defining dependency "reorder" 00:01:40.516 Message: lib/sched: Defining dependency "sched" 00:01:40.516 Message: lib/security: Defining dependency "security" 00:01:40.516 Message: lib/stack: Defining dependency "stack" 00:01:40.516 Has header "linux/userfaultfd.h" : YES 00:01:40.516 Has header "linux/vduse.h" : YES 00:01:40.516 Message: lib/vhost: Defining dependency "vhost" 00:01:40.516 Message: lib/ipsec: Defining dependency "ipsec" 00:01:40.516 Message: lib/pdcp: Defining dependency "pdcp" 00:01:40.516 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.516 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:40.516 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:40.516 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:40.516 Message: lib/fib: Defining dependency "fib" 00:01:40.516 Message: lib/port: Defining dependency "port" 00:01:40.516 Message: lib/pdump: Defining dependency "pdump" 00:01:40.516 Message: lib/table: Defining dependency "table" 00:01:40.516 Message: lib/pipeline: Defining dependency "pipeline" 00:01:40.516 Message: lib/graph: Defining dependency "graph" 00:01:40.516 Message: lib/node: Defining dependency "node" 00:01:41.902 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:41.902 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:41.902 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:41.902 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:41.902 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:41.902 Compiler for C supports arguments -Wno-unused-value: YES 00:01:41.902 Compiler for C supports arguments -Wno-format: YES 00:01:41.902 Compiler for C supports arguments -Wno-format-security: YES 00:01:41.902 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:41.902 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:41.902 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:41.902 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:41.902 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:41.902 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:41.902 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:41.902 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:41.902 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:41.902 Has header "sys/epoll.h" : YES 00:01:41.902 Program doxygen found: YES (/usr/bin/doxygen) 00:01:41.902 Configuring doxy-api-html.conf using configuration 
00:01:41.902 Configuring doxy-api-man.conf using configuration
00:01:41.902 Program mandb found: YES (/usr/bin/mandb)
00:01:41.902 Program sphinx-build found: NO
00:01:41.902 Configuring rte_build_config.h using configuration
00:01:41.902 Message:
00:01:41.902 =================
00:01:41.902 Applications Enabled
00:01:41.902 =================
00:01:41.902 
00:01:41.902 apps:
00:01:41.902 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:01:41.902 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:01:41.902 test-pmd, test-regex, test-sad, test-security-perf,
00:01:41.902 
00:01:41.902 Message:
00:01:41.902 =================
00:01:41.902 Libraries Enabled
00:01:41.902 =================
00:01:41.902 
00:01:41.902 libs:
00:01:41.902 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:01:41.902 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:01:41.902 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:01:41.902 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:01:41.902 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:01:41.902 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:01:41.902 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:01:41.902 graph, node,
00:01:41.902 
00:01:41.902 Message:
00:01:41.902 ===============
00:01:41.902 Drivers Enabled
00:01:41.902 ===============
00:01:41.902 
00:01:41.902 common:
00:01:41.902 
00:01:41.902 bus:
00:01:41.902 pci, vdev,
00:01:41.902 mempool:
00:01:41.902 ring,
00:01:41.902 dma:
00:01:41.902 
00:01:41.902 net:
00:01:41.902 i40e,
00:01:41.902 raw:
00:01:41.902 
00:01:41.902 crypto:
00:01:41.902 
00:01:41.902 compress:
00:01:41.902 
00:01:41.902 regex:
00:01:41.902 
00:01:41.902 ml:
00:01:41.902 
00:01:41.902 vdpa:
00:01:41.902 
00:01:41.902 event:
00:01:41.902 
00:01:41.902 baseband:
00:01:41.902 
00:01:41.902 gpu:
00:01:41.902 
00:01:41.902 
00:01:41.902 Message:
00:01:41.902 =================
00:01:41.902 Content Skipped
00:01:41.902 =================
00:01:41.902 
00:01:41.902 apps:
00:01:41.902 
00:01:41.902 libs:
00:01:41.902 
00:01:41.902 drivers:
00:01:41.902 common/cpt: not in enabled drivers build config
00:01:41.902 common/dpaax: not in enabled drivers build config
00:01:41.902 common/iavf: not in enabled drivers build config
00:01:41.902 common/idpf: not in enabled drivers build config
00:01:41.902 common/ionic: not in enabled drivers build config
00:01:41.902 common/mvep: not in enabled drivers build config
00:01:41.902 common/octeontx: not in enabled drivers build config
00:01:41.902 bus/auxiliary: not in enabled drivers build config
00:01:41.902 bus/cdx: not in enabled drivers build config
00:01:41.902 bus/dpaa: not in enabled drivers build config
00:01:41.902 bus/fslmc: not in enabled drivers build config
00:01:41.902 bus/ifpga: not in enabled drivers build config
00:01:41.902 bus/platform: not in enabled drivers build config
00:01:41.902 bus/uacce: not in enabled drivers build config
00:01:41.902 bus/vmbus: not in enabled drivers build config
00:01:41.902 common/cnxk: not in enabled drivers build config
00:01:41.902 common/mlx5: not in enabled drivers build config
00:01:41.902 common/nfp: not in enabled drivers build config
00:01:41.902 common/nitrox: not in enabled drivers build config
00:01:41.902 common/qat: not in enabled drivers build config
00:01:41.902 common/sfc_efx: not in enabled drivers build config
00:01:41.902 mempool/bucket: not in enabled drivers build config
00:01:41.902 mempool/cnxk: not in enabled drivers build config
00:01:41.902 mempool/dpaa: not in enabled drivers build config
00:01:41.902 mempool/dpaa2: not in enabled drivers build config
00:01:41.902 mempool/octeontx: not in enabled drivers build config
00:01:41.902 mempool/stack: not in enabled drivers build config
00:01:41.902 dma/cnxk: not in enabled drivers build config
00:01:41.902 dma/dpaa: not in enabled drivers build config
00:01:41.902 dma/dpaa2: not in enabled drivers build config
00:01:41.902 dma/hisilicon: not in enabled drivers build config
00:01:41.902 dma/idxd: not in enabled drivers build config
00:01:41.902 dma/ioat: not in enabled drivers build config
00:01:41.902 dma/odm: not in enabled drivers build config
00:01:41.902 dma/skeleton: not in enabled drivers build config
00:01:41.902 net/af_packet: not in enabled drivers build config
00:01:41.902 net/af_xdp: not in enabled drivers build config
00:01:41.902 net/ark: not in enabled drivers build config
00:01:41.902 net/atlantic: not in enabled drivers build config
00:01:41.902 net/avp: not in enabled drivers build config
00:01:41.902 net/axgbe: not in enabled drivers build config
00:01:41.902 net/bnx2x: not in enabled drivers build config
00:01:41.902 net/bnxt: not in enabled drivers build config
00:01:41.902 net/bonding: not in enabled drivers build config
00:01:41.902 net/cnxk: not in enabled drivers build config
00:01:41.902 net/cpfl: not in enabled drivers build config
00:01:41.902 net/cxgbe: not in enabled drivers build config
00:01:41.902 net/dpaa: not in enabled drivers build config
00:01:41.902 net/dpaa2: not in enabled drivers build config
00:01:41.902 net/e1000: not in enabled drivers build config
00:01:41.902 net/ena: not in enabled drivers build config
00:01:41.902 net/enetc: not in enabled drivers build config
00:01:41.902 net/enetfec: not in enabled drivers build config
00:01:41.902 net/enic: not in enabled drivers build config
00:01:41.902 net/failsafe: not in enabled drivers build config
00:01:41.902 net/fm10k: not in enabled drivers build config
00:01:41.902 net/gve: not in enabled drivers build config
00:01:41.902 net/hinic: not in enabled drivers build config
00:01:41.902 net/hns3: not in enabled drivers build config
00:01:41.902 net/iavf: not in enabled drivers build config
00:01:41.902 net/ice: not in enabled drivers build config
00:01:41.902 net/idpf: not in enabled drivers build config
00:01:41.902 net/igc: not in enabled drivers build config
00:01:41.902 net/ionic: not in enabled drivers build config
00:01:41.902 net/ipn3ke: not in enabled drivers build config
00:01:41.902 net/ixgbe: not in enabled drivers build config
00:01:41.902 net/mana: not in enabled drivers build config
00:01:41.902 net/memif: not in enabled drivers build config
00:01:41.902 net/mlx4: not in enabled drivers build config
00:01:41.902 net/mlx5: not in enabled drivers build config
00:01:41.902 net/mvneta: not in enabled drivers build config
00:01:41.903 net/mvpp2: not in enabled drivers build config
00:01:41.903 net/netvsc: not in enabled drivers build config
00:01:41.903 net/nfb: not in enabled drivers build config
00:01:41.903 net/nfp: not in enabled drivers build config
00:01:41.903 net/ngbe: not in enabled drivers build config
00:01:41.903 net/ntnic: not in enabled drivers build config
00:01:41.903 net/null: not in enabled drivers build config
00:01:41.903 net/octeontx: not in enabled drivers build config
00:01:41.903 net/octeon_ep: not in enabled drivers build config
00:01:41.903 net/pcap: not in enabled drivers build config
00:01:41.903 net/pfe: not in enabled drivers build config
00:01:41.903 net/qede: not in enabled drivers build config
00:01:41.903 net/ring: not in enabled drivers build config
00:01:41.903 net/sfc: not in enabled drivers build config
00:01:41.903 net/softnic: not in enabled drivers build config
00:01:41.903 net/tap: not in enabled drivers build config
00:01:41.903 net/thunderx: not in enabled drivers build config
00:01:41.903 net/txgbe: not in enabled drivers build config
00:01:41.903 net/vdev_netvsc: not in enabled drivers build config
00:01:41.903 net/vhost: not in enabled drivers build config
00:01:41.903 net/virtio: not in enabled drivers build config
00:01:41.903 net/vmxnet3: not in enabled drivers build config
00:01:41.903 raw/cnxk_bphy: not in enabled drivers build config
00:01:41.903 raw/cnxk_gpio: not in enabled drivers build config
00:01:41.903 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:41.903 raw/ifpga: not in enabled drivers build config
00:01:41.903 raw/ntb: not in enabled drivers build config
00:01:41.903 raw/skeleton: not in enabled drivers build config
00:01:41.903 crypto/armv8: not in enabled drivers build config
00:01:41.903 crypto/bcmfs: not in enabled drivers build config
00:01:41.903 crypto/caam_jr: not in enabled drivers build config
00:01:41.903 crypto/ccp: not in enabled drivers build config
00:01:41.903 crypto/cnxk: not in enabled drivers build config
00:01:41.903 crypto/dpaa_sec: not in enabled drivers build config
00:01:41.903 crypto/dpaa2_sec: not in enabled drivers build config
00:01:41.903 crypto/ionic: not in enabled drivers build config
00:01:41.903 crypto/ipsec_mb: not in enabled drivers build config
00:01:41.903 crypto/mlx5: not in enabled drivers build config
00:01:41.903 crypto/mvsam: not in enabled drivers build config
00:01:41.903 crypto/nitrox: not in enabled drivers build config
00:01:41.903 crypto/null: not in enabled drivers build config
00:01:41.903 crypto/octeontx: not in enabled drivers build config
00:01:41.903 crypto/openssl: not in enabled drivers build config
00:01:41.903 crypto/scheduler: not in enabled drivers build config
00:01:41.903 crypto/uadk: not in enabled drivers build config
00:01:41.903 crypto/virtio: not in enabled drivers build config
00:01:41.903 compress/isal: not in enabled drivers build config
00:01:41.903 compress/mlx5: not in enabled drivers build config
00:01:41.903 compress/nitrox: not in enabled drivers build config
00:01:41.903 compress/octeontx: not in enabled drivers build config
00:01:41.903 compress/uadk: not in enabled drivers build config
00:01:41.903 compress/zlib: not in enabled drivers build config
00:01:41.903 regex/mlx5: not in enabled drivers build config
00:01:41.903 regex/cn9k: not in enabled drivers build config
00:01:41.903 ml/cnxk: not in enabled drivers build config
00:01:41.903 vdpa/ifc: not in enabled drivers build config
00:01:41.903 vdpa/mlx5: not in enabled drivers build config
00:01:41.903 vdpa/nfp: not in enabled drivers build config
00:01:41.903 vdpa/sfc: not in enabled drivers build config
00:01:41.903 event/cnxk: not in enabled drivers build config
00:01:41.903 event/dlb2: not in enabled drivers build config
00:01:41.903 event/dpaa: not in enabled drivers build config
00:01:41.903 event/dpaa2: not in enabled drivers build config
00:01:41.903 event/dsw: not in enabled drivers build config
00:01:41.903 event/opdl: not in enabled drivers build config
00:01:41.903 event/skeleton: not in enabled drivers build config
00:01:41.903 event/sw: not in enabled drivers build config
00:01:41.903 event/octeontx: not in enabled drivers build config
00:01:41.903 baseband/acc: not in enabled drivers build config
00:01:41.903 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:41.903 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:41.903 baseband/la12xx: not in enabled drivers build config
00:01:41.903 baseband/null: not in enabled drivers build config
00:01:41.903 baseband/turbo_sw: not in enabled drivers build config
00:01:41.903 gpu/cuda: not in enabled drivers build config
00:01:41.903 
00:01:41.903 
00:01:41.903 Build targets in project: 224
00:01:41.903 
00:01:41.903 DPDK 24.07.0-rc3
00:01:41.903 
00:01:41.903 User defined options
00:01:41.903 libdir : lib
00:01:41.903 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:41.903 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:41.903 c_link_args :
00:01:41.903 enable_docs : false
00:01:41.903 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:41.903 enable_kmods : false
00:01:41.903 machine : native
00:01:41.903 tests : false
00:01:41.903 
00:01:41.903 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:41.903 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:41.903 18:02:07 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:41.903 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:41.903 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:41.903 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:41.903 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:41.903 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:41.903 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:41.903 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:41.903 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:41.903 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:41.903 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:41.903 [10/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:41.903 [11/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:42.164 [12/723] Linking static target lib/librte_kvargs.a
00:01:42.164 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:42.164 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:42.164 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:42.164 [16/723] Linking static target lib/librte_log.a
00:01:42.430 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:01:42.430 [18/723] Linking static target lib/librte_argparse.a
00:01:42.689 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.949 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.949 [21/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
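The WARNING above means the configure step invoked meson without the explicit `setup` subcommand, a spelling newer meson releases flag as ambiguous and deprecated. For reference only, here is a minimal sketch of an equivalent non-deprecated invocation, reconstructed from the "User defined options" summary printed above; the build directory name matches the build-tmp path in the logged ninja command, but the one-to-one mapping of each printed option onto a -D flag, and running it from the dpdk source tree, are assumptions, not taken from the job's actual scripts:

    # Hypothetical reconstruction of the configure step, run from the dpdk source dir.
    # Option values are read directly off the "User defined options" summary above.
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j48

The -j48 parallelism matches the ninja invocation logged above; writing `meson setup` instead of bare `meson` is exactly the change the deprecation warning asks for.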
00:01:42.949 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:42.949 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:42.949 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:42.949 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:42.949 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:42.949 [27/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:42.949 [28/723] Linking target lib/librte_log.so.24.2
00:01:42.949 [29/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:42.949 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:42.949 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:42.949 [32/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:42.949 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:42.949 [34/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:42.949 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:42.949 [36/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:42.949 [37/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:42.949 [38/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:43.213 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:43.214 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:43.214 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:43.214 [42/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:43.214 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:43.214 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:43.214 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:43.214 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:43.214 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:43.214 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:43.214 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:43.214 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:43.214 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:43.214 [52/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols
00:01:43.214 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:43.214 [54/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:43.214 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:43.214 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:43.214 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:43.214 [58/723] Linking target lib/librte_kvargs.so.24.2
00:01:43.214 [59/723] Linking target lib/librte_argparse.so.24.2
00:01:43.214 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:43.214 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:43.214 [62/723] Compiling C object
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:43.473 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:43.473 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:43.473 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.735 [66/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.735 [67/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:43.735 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:43.735 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:43.735 [70/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:43.735 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:43.996 [72/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:43.996 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:43.996 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:43.996 [75/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.996 [76/723] Linking static target lib/librte_pci.a 00:01:43.996 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:43.996 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:44.265 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.265 [80/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:44.265 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:44.265 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.265 [83/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:44.265 [84/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:44.265 [85/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:44.265 [86/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.265 [87/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:44.265 [88/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:44.265 [89/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:44.265 [90/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.265 [91/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:44.265 [92/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.265 [93/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.265 [94/723] Linking static target lib/librte_ring.a 00:01:44.526 [95/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.526 [96/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:44.526 [97/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.526 [98/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.526 [99/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.526 [100/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:44.526 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.526 [102/723] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.526 [103/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:44.526 [104/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:44.526 [105/723] Linking static target lib/librte_meter.a 00:01:44.526 [106/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.526 [107/723] Linking static target lib/librte_telemetry.a 00:01:44.526 [108/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:44.526 [109/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:44.526 [110/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:44.526 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.791 [112/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:44.791 [113/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:44.791 [114/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:44.791 [115/723] Linking static target lib/librte_net.a 00:01:44.791 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:44.791 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.791 [118/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.791 [119/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.054 [120/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.054 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.054 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.055 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.055 [124/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.055 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:45.055 [126/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.055 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.316 [128/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.316 [129/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.316 [130/723] Linking static target lib/librte_mempool.a 00:01:45.316 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.316 [132/723] Linking target lib/librte_telemetry.so.24.2 00:01:45.316 [133/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.316 [134/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.316 [135/723] Linking static target lib/librte_eal.a 00:01:45.316 [136/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.316 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.316 [138/723] Linking static target lib/librte_cmdline.a 00:01:45.575 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:45.575 [140/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:45.575 [141/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:45.575 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.575 [143/723] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.575 [144/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.575 [145/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:45.575 [146/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:45.575 [147/723] Linking static target lib/librte_cfgfile.a 00:01:45.575 [148/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:45.575 [149/723] Linking static target lib/librte_metrics.a 00:01:45.575 [150/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:45.838 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.838 [152/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:45.838 [153/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.838 [154/723] Linking static target lib/librte_rcu.a 00:01:45.838 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:45.838 [156/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:45.838 [157/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:45.838 [158/723] Linking static target lib/librte_bitratestats.a 00:01:46.099 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:46.099 [160/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:46.099 [161/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:46.099 [162/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.099 [163/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.099 [164/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.099 [165/723] Linking static target lib/librte_mbuf.a 00:01:46.099 [166/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:46.359 [167/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.359 [168/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.359 [169/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.359 [170/723] Linking static target lib/librte_timer.a 00:01:46.359 [171/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:46.359 [172/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.359 [173/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.359 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.359 [175/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:46.359 [176/723] Linking static target lib/librte_bbdev.a 00:01:46.359 [177/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:46.621 [178/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.621 [179/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:46.621 [180/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.622 [181/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.622 [182/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.622 [183/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.622 [184/723] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:46.622 [185/723] Linking static target lib/librte_compressdev.a 00:01:46.622 [186/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.889 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.889 [188/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.889 [189/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:46.889 [190/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:46.889 [191/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:46.889 [192/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.889 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.465 [194/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.465 [195/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:47.465 [196/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:47.465 [197/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:47.465 [198/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.465 [199/723] Linking static target lib/librte_distributor.a 00:01:47.465 [200/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:47.465 [201/723] Linking static target lib/librte_dmadev.a 00:01:47.465 [202/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.728 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:47.728 [204/723] Linking static target lib/librte_bpf.a 00:01:47.728 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:47.728 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:47.728 [207/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:47.728 [208/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:47.728 [209/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:47.728 [210/723] Linking static target lib/librte_dispatcher.a 00:01:47.728 [211/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:47.991 [212/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:47.991 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:47.991 [214/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:47.991 [215/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.991 [216/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:47.991 [217/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.991 [218/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:47.991 [219/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.991 [220/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:47.991 [221/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:47.991 [222/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:47.991 [223/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:47.991 [224/723] 
Linking static target lib/librte_gro.a 00:01:47.991 [225/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.991 [226/723] Linking static target lib/librte_gpudev.a 00:01:47.991 [227/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.991 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:47.991 [229/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:47.991 [230/723] Linking static target lib/librte_jobstats.a 00:01:47.991 [231/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.254 [232/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:48.254 [233/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.254 [234/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:48.254 [235/723] Linking static target lib/librte_gso.a 00:01:48.254 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:48.518 [237/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.518 [238/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:48.518 [239/723] Linking static target lib/librte_latencystats.a 00:01:48.518 [240/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:48.518 [241/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.518 [242/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:48.518 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:48.518 [244/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.518 [245/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.518 [246/723] Linking static target lib/librte_ip_frag.a 00:01:48.518 [247/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:48.781 [248/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:48.781 [249/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:48.781 [250/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:48.781 [251/723] Linking static target lib/librte_efd.a 00:01:48.781 [252/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:48.781 [253/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:48.781 [254/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.781 [255/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:48.781 [256/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:49.043 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:49.043 [258/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.043 [259/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:49.043 [260/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.043 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.043 [262/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.308 [263/723] 
Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:49.308 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:49.308 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:49.308 [266/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:49.308 [267/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.308 [268/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:49.308 [269/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.569 [270/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:49.569 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:49.569 [272/723] Linking static target lib/librte_regexdev.a 00:01:49.569 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:49.569 [274/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:49.569 [275/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:49.569 [276/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:49.569 [277/723] Linking static target lib/librte_rawdev.a 00:01:49.569 [278/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:49.569 [279/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:49.569 [280/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:49.831 [281/723] Linking static target lib/librte_pcapng.a 00:01:49.831 [282/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:49.831 [283/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.831 [284/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:49.831 [285/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:49.831 [286/723] Linking static target lib/librte_power.a 00:01:49.831 [287/723] Linking static target lib/librte_lpm.a 00:01:49.831 [288/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:49.831 [289/723] Linking static target lib/librte_mldev.a 00:01:49.831 [290/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:49.831 [291/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:49.831 [292/723] Linking static target lib/librte_stack.a 00:01:49.831 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:50.096 [294/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:50.096 [295/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:50.096 [296/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.096 [297/723] Linking static target lib/librte_reorder.a 00:01:50.096 [298/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.096 [299/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:50.096 [300/723] Linking static target lib/acl/libavx2_tmp.a 00:01:50.096 [301/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:50.096 [302/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.372 [303/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.372 [304/723] Linking static target lib/librte_security.a 00:01:50.372 [305/723] Generating lib/lpm.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:50.372 [306/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.372 [307/723] Linking static target lib/librte_cryptodev.a 00:01:50.372 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.372 [309/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.372 [310/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.372 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.372 [312/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:50.372 [313/723] Linking static target lib/librte_hash.a 00:01:50.638 [314/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.638 [315/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.638 [316/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:50.638 [317/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:50.638 [318/723] Linking static target lib/librte_rib.a 00:01:50.638 [319/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:50.638 [320/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:50.638 [321/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.638 [322/723] Linking static target lib/acl/libavx512_tmp.a 00:01:50.638 [323/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.638 [324/723] Linking static target lib/librte_acl.a 00:01:50.638 [325/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.638 [326/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:50.906 [327/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.906 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:50.906 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:50.906 [330/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:50.906 [331/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:50.906 [332/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:50.906 [333/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.906 [334/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:50.906 [335/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:50.906 [336/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:50.906 [337/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:51.167 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:51.167 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.430 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:51.430 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.430 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:51.430 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.690 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:51.950 [345/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:51.950 [346/723] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:51.950 [347/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:51.950 [348/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:51.950 [349/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:51.950 [350/723] Linking static target lib/librte_eventdev.a 00:01:51.950 [351/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:52.217 [352/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.217 [353/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:52.217 [354/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:52.217 [355/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:52.217 [356/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:52.217 [357/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:52.217 [358/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:52.217 [359/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:52.217 [360/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:52.217 [361/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:52.217 [362/723] Linking static target lib/librte_member.a 00:01:52.217 [363/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:52.217 [364/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:52.481 [365/723] Linking static target lib/librte_sched.a 00:01:52.481 [366/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.481 [367/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.481 [368/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:52.481 [369/723] Linking static target lib/librte_fib.a 00:01:52.481 [370/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:52.481 [371/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:52.481 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:52.481 [373/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:52.481 [374/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:52.481 [375/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:52.481 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:52.748 [377/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.748 [378/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:52.748 [379/723] Linking static target lib/librte_ethdev.a 00:01:52.748 [380/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:52.748 [381/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:52.748 [382/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.748 [383/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:52.748 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:52.748 [385/723] Linking static target lib/librte_ipsec.a 00:01:53.009 [386/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:53.009 [387/723] Generating lib/fib.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:53.009 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.009 [389/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.009 [390/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:53.270 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:53.270 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:53.270 [393/723] Linking static target lib/librte_pdump.a 00:01:53.270 [394/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:53.270 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:53.538 [396/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.538 [397/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:53.538 [398/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:53.538 [399/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.538 [400/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:53.538 [401/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:53.538 [402/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:53.538 [403/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:53.538 [404/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:53.538 [405/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:53.538 [406/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:53.800 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:53.800 [408/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.800 [409/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:53.800 [410/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:53.800 [411/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:53.800 [412/723] Linking static target lib/librte_pdcp.a 00:01:53.800 [413/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:53.800 [414/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:53.800 [415/723] Linking static target lib/librte_table.a 00:01:54.064 [416/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:54.064 [417/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:54.064 [418/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:54.064 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:54.064 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:54.324 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.324 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:54.324 [423/723] Linking static target lib/librte_graph.a 00:01:54.324 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.588 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.588 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.588 [427/723] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.588 [428/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:54.588 [429/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:54.588 [430/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:54.589 [431/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:54.589 [432/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:54.849 [433/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:54.849 [434/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:54.849 [435/723] Linking static target lib/librte_port.a 00:01:54.849 [436/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.849 [437/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.849 [438/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:54.849 [439/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.849 [440/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.114 [441/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.114 [442/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.114 [443/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:55.114 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.114 [445/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.114 [446/723] Linking static target drivers/librte_bus_vdev.a 00:01:55.378 [447/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:55.378 [448/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.378 [449/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.378 [450/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:55.378 [451/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.378 [452/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:55.378 [453/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.378 [454/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.642 [455/723] Linking static target drivers/librte_bus_pci.a 00:01:55.642 [456/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.642 [457/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:55.642 [458/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:55.642 [459/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.642 [460/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:55.642 [461/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:55.642 [462/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.642 [463/723] Linking static target lib/librte_node.a 00:01:55.642 [464/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:55.642 [465/723] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:55.642 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:55.902 [467/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:55.902 [468/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:55.902 [469/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:55.902 [470/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:55.902 [471/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:55.902 [472/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:56.170 [473/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:56.170 [474/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.170 [475/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.170 [476/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:56.170 [477/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.170 [478/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:56.170 [479/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.170 [480/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:56.430 [481/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:56.430 [482/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.430 [483/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:56.430 [484/723] Linking target lib/librte_eal.so.24.2 00:01:56.430 [485/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:56.430 [486/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.430 [487/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:56.430 [488/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.430 [489/723] Linking static target drivers/librte_mempool_ring.a 00:01:56.430 [490/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:56.430 [491/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.694 [492/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:56.694 [493/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:56.694 [494/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:56.694 [495/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:56.694 [496/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:56.694 [497/723] Linking target lib/librte_ring.so.24.2 00:01:56.694 [498/723] Linking target lib/librte_meter.so.24.2 00:01:56.694 [499/723] Linking target lib/librte_timer.so.24.2 00:01:56.694 [500/723] Linking target lib/librte_pci.so.24.2 00:01:56.694 [501/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:56.954 [502/723] Linking target lib/librte_acl.so.24.2 00:01:56.954 [503/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:56.954 [504/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:56.954 [505/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:56.954 [506/723] Linking target lib/librte_cfgfile.so.24.2 00:01:56.954 [507/723] 
Linking target lib/librte_jobstats.so.24.2 00:01:56.954 [508/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:56.954 [509/723] Linking target lib/librte_dmadev.so.24.2 00:01:56.954 [510/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:56.954 [511/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:56.954 [512/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:56.954 [513/723] Linking target lib/librte_rawdev.so.24.2 00:01:56.954 [514/723] Linking target lib/librte_stack.so.24.2 00:01:56.954 [515/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:56.954 [516/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:56.954 [517/723] Linking target lib/librte_rcu.so.24.2 00:01:56.954 [518/723] Linking target lib/librte_mempool.so.24.2 00:01:56.954 [519/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:56.954 [520/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:57.214 [521/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:57.214 [522/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:57.214 [523/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:57.214 [524/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:57.214 [525/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:57.214 [526/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:57.214 [527/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:57.214 [528/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:57.214 [529/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:57.214 [530/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:57.214 [531/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:57.214 [532/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:57.215 [533/723] Linking target lib/librte_rib.so.24.2 00:01:57.215 [534/723] Linking target lib/librte_mbuf.so.24.2 00:01:57.215 [535/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:57.479 [536/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:57.479 [537/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:57.479 [538/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:57.479 [539/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:57.746 [540/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:57.746 [541/723] Linking target lib/librte_fib.so.24.2 00:01:57.746 [542/723] Linking target lib/librte_net.so.24.2 00:01:57.746 [543/723] Linking target lib/librte_bbdev.so.24.2 00:01:57.746 [544/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:57.746 [545/723] Linking target lib/librte_compressdev.so.24.2 00:01:57.746 [546/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:57.746 [547/723] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:57.746 [548/723] Linking target lib/librte_cryptodev.so.24.2 00:01:57.746 [549/723] Linking target lib/librte_distributor.so.24.2 00:01:57.746 [550/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:57.746 [551/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:58.008 [552/723] Linking target lib/librte_gpudev.so.24.2 00:01:58.008 [553/723] Linking target lib/librte_regexdev.so.24.2 00:01:58.008 [554/723] Linking target lib/librte_mldev.so.24.2 00:01:58.008 [555/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:58.008 [556/723] Linking target lib/librte_reorder.so.24.2 00:01:58.008 [557/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:58.008 [558/723] Linking target lib/librte_sched.so.24.2 00:01:58.008 [559/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:58.008 [560/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:58.008 [561/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:58.008 [562/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:58.008 [563/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:58.008 [564/723] Linking target lib/librte_cmdline.so.24.2 00:01:58.008 [565/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:58.008 [566/723] Linking target lib/librte_hash.so.24.2 00:01:58.008 [567/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:58.008 [568/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:58.008 [569/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:58.008 [570/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:58.008 [571/723] Linking target lib/librte_security.so.24.2 00:01:58.272 [572/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:58.272 [573/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:58.272 [574/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:58.272 [575/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:58.272 [576/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:58.272 [577/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:58.272 [578/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:58.272 [579/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:58.272 [580/723] Linking target lib/librte_efd.so.24.2 00:01:58.272 [581/723] Linking target lib/librte_lpm.so.24.2 00:01:58.272 [582/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:58.272 [583/723] Linking target lib/librte_member.so.24.2 00:01:58.272 [584/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:58.272 [585/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:58.272 [586/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 
00:01:58.536 [587/723] Linking target lib/librte_ipsec.so.24.2 00:01:58.536 [588/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:58.536 [589/723] Linking target lib/librte_pdcp.so.24.2 00:01:58.536 [590/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:58.536 [591/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:58.800 [592/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:58.800 [593/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:58.800 [594/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:58.800 [595/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:58.800 [596/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:58.800 [597/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:58.800 [598/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:58.800 [599/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:59.061 [600/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:59.061 [601/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:59.061 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:59.061 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:59.061 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:59.324 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:59.324 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:59.324 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:59.324 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:59.324 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:59.584 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:59.584 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:59.584 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:59.584 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:59.584 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:59.584 [615/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:59.584 [616/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:59.584 [617/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:59.584 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:59.843 [619/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:59.843 [620/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:59.843 [621/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:59.843 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:00.102 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:00.102 [624/723] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:00.361 [625/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:00.361 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:00.361 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:00.361 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:00.361 [629/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:00.361 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:00.361 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:00.361 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:00.361 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:00.361 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:00.361 [635/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:00.620 [636/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:00.620 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:00.620 [638/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:00.620 [639/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.620 [640/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:00.620 [641/723] Linking target lib/librte_ethdev.so.24.2 00:02:00.879 [642/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:00.879 [643/723] Linking target lib/librte_metrics.so.24.2 00:02:00.879 [644/723] Linking target lib/librte_gso.so.24.2 00:02:00.879 [645/723] Linking target lib/librte_pcapng.so.24.2 00:02:00.879 [646/723] Linking target lib/librte_ip_frag.so.24.2 00:02:00.879 [647/723] Linking target lib/librte_gro.so.24.2 00:02:00.879 [648/723] Linking target lib/librte_bpf.so.24.2 00:02:00.879 [649/723] Linking target lib/librte_power.so.24.2 00:02:00.879 [650/723] Linking target lib/librte_eventdev.so.24.2 00:02:00.879 [651/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:00.879 [652/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:01.137 [653/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:01.137 [654/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:01.137 [655/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:01.137 [656/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:01.137 [657/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:01.137 [658/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:01.137 [659/723] Linking target lib/librte_graph.so.24.2 00:02:01.137 [660/723] Linking target lib/librte_bitratestats.so.24.2 00:02:01.137 [661/723] Linking target lib/librte_latencystats.so.24.2 00:02:01.137 [662/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:01.137 [663/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:01.137 [664/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:01.137 [665/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:01.137 [666/723] Linking target 
lib/librte_pdump.so.24.2 00:02:01.137 [667/723] Linking target lib/librte_dispatcher.so.24.2 00:02:01.137 [668/723] Linking target lib/librte_port.so.24.2 00:02:01.137 [669/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:01.137 [670/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:01.137 [671/723] Linking target lib/librte_node.so.24.2 00:02:01.395 [672/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:01.395 [673/723] Linking target lib/librte_table.so.24.2 00:02:01.395 [674/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:01.395 [675/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:01.395 [676/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:01.654 [677/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.654 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:01.912 [679/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:01.912 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:02.171 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:02.171 [682/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:02.171 [683/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:02.428 [684/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:02.687 [685/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:02.687 [686/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:02.687 [687/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:02.687 [688/723] Linking static target drivers/librte_net_i40e.a 00:02:02.687 [689/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:03.253 [690/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.253 [691/723] Linking target drivers/librte_net_i40e.so.24.2 00:02:03.253 [692/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:04.199 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:04.199 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:05.170 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:13.277 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:13.277 [697/723] Linking static target lib/librte_pipeline.a 00:02:13.277 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.277 [699/723] Linking static target lib/librte_vhost.a 00:02:13.277 [700/723] Linking target app/dpdk-pdump 00:02:13.277 [701/723] Linking target app/dpdk-test-acl 00:02:13.277 [702/723] Linking target app/dpdk-test-cmdline 00:02:13.277 [703/723] Linking target app/dpdk-test-pipeline 00:02:13.277 [704/723] Linking target app/dpdk-test-sad 00:02:13.277 [705/723] Linking target app/dpdk-dumpcap 00:02:13.277 [706/723] Linking target app/dpdk-test-regex 00:02:13.277 [707/723] Linking target app/dpdk-test-bbdev 00:02:13.277 [708/723] Linking target app/dpdk-test-mldev 00:02:13.277 [709/723] Linking target app/dpdk-test-gpudev 00:02:13.277 [710/723] Linking target app/dpdk-proc-info 00:02:13.277 [711/723] 
Linking target app/dpdk-test-crypto-perf 00:02:13.277 [712/723] Linking target app/dpdk-graph 00:02:13.277 [713/723] Linking target app/dpdk-test-dma-perf 00:02:13.277 [714/723] Linking target app/dpdk-test-fib 00:02:13.277 [715/723] Linking target app/dpdk-test-security-perf 00:02:13.277 [716/723] Linking target app/dpdk-test-eventdev 00:02:13.535 [717/723] Linking target app/dpdk-test-compress-perf 00:02:13.535 [718/723] Linking target app/dpdk-test-flow-perf 00:02:13.535 [719/723] Linking target app/dpdk-testpmd 00:02:13.793 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.050 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:14.984 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.243 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:15.243 18:02:41 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:15.243 18:02:41 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:15.243 18:02:41 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:15.243 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:15.243 [0/1] Installing files. 00:02:15.508 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:15.508 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:15.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 
00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:15.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:15.512 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.513 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:15.514 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.514 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.514 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.514 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.514 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:15.515 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:16.088 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:16.088 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:16.088 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.088 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:16.088 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.088 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.089 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.090 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.090 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:16.091 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:16.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:16.093 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:16.093 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:16.093 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:16.093 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:16.093 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:16.093 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:16.093 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:16.093 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:16.093 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:16.093 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:16.093 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:16.093 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:16.093 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:16.093 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:16.093 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:16.093 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:16.093 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:16.093 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:16.093 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:16.093 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:16.093 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:16.093 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:16.093 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:16.093 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:16.093 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:16.093 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:16.093 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:16.093 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:16.093 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:16.093 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:16.093 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:16.093 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:16.093 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:16.093 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:16.093 
Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:16.093 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:16.093 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:16.093 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:16.093 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:16.093 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:16.093 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:16.093 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:16.093 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:16.093 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:16.093 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:16.093 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:16.093 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:16.093 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:16.093 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:16.093 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:16.093 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:16.094 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:16.094 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:16.094 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:16.094 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:16.094 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:16.094 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:16.094 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:16.094 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:16.094 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:16.094 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:16.094 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:16.094 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:16.094 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:16.094 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:16.094 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:16.094 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:16.094 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:16.094 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:16.094 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:16.094 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:16.094 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:16.094 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:16.094 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:16.094 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:16.094 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:16.094 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:16.094 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:16.094 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:16.094 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:16.094 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:16.094 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:16.094 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:16.094 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:16.094 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:16.094 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:16.094 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:16.094 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:16.094 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:16.094 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:16.094 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:16.094 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:16.094 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:16.094 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:16.094 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:16.094 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:16.094 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:16.094 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:16.094 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:16.094 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:16.094 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:16.094 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:16.094 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:16.094 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:16.094 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:16.094 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
00:02:16.094 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:16.094 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:16.094 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:16.094 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:16.094 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:16.094 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:16.094 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:16.094 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:16.094 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:16.094 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:16.094 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:16.094 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:16.094 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:16.094 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:16.094 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:16.094 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:16.094 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:16.094 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:16.094 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:16.094 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:16.094 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:16.094 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:16.094 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:16.094 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:16.094 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:16.094 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:16.094 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:16.094 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:16.094 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2'
00:02:16.094 18:02:42 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat
00:02:16.094 18:02:42 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:16.094
00:02:16.094 real 0m39.909s
00:02:16.094 user 13m55.034s
00:02:16.094 sys 1m59.810s
00:02:16.094 18:02:42 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:16.094 18:02:42 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:16.094 ************************************
00:02:16.094 END TEST build_native_dpdk
00:02:16.094 ************************************
00:02:16.094 18:02:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:16.094 18:02:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:16.094 18:02:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:16.095 18:02:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:16.095 18:02:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:16.095 18:02:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:16.095 18:02:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:16.095 18:02:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:16.095 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:16.354 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:16.354 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:16.354 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:16.614 Using 'verbs' RDMA provider
00:02:27.162 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:37.140 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:37.140 Creating mk/config.mk...done.
00:02:37.140 Creating mk/cc.flags.mk...done.
00:02:37.140 Type 'make' to build.
00:02:37.140 18:03:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:02:37.140 18:03:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:37.140 18:03:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:37.140 18:03:01 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.140 ************************************
00:02:37.140 START TEST make
00:02:37.140 ************************************
00:02:37.140 18:03:01 make -- common/autotest_common.sh@1125 -- $ make -j48
00:02:37.140 make[1]: Nothing to be done for 'all'.
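The build_native_dpdk test above ends by pointing SPDK's configure at the freshly installed DPDK tree; the "Using .../dpdk/build/lib/pkgconfig for additional libs..." line shows configure resolving the DPDK libraries through the libdpdk.pc file installed a few entries earlier. A minimal sketch of reproducing that wiring by hand, assuming this job's workspace layout (paths are taken from the log, not a general recipe):

  # Sketch only; paths assume this job's workspace layout.
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH"
  pkg-config --libs libdpdk   # should print -L$DPDK_BUILD/lib plus the rte_* libraries installed above
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-dpdk="$DPDK_BUILD" --with-shared
  make -j48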
00:02:37.403 The Meson build system 00:02:37.403 Version: 1.3.1 00:02:37.403 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:37.403 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:37.403 Build type: native build 00:02:37.403 Project name: libvfio-user 00:02:37.403 Project version: 0.0.1 00:02:37.403 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.403 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:37.403 Host machine cpu family: x86_64 00:02:37.403 Host machine cpu: x86_64 00:02:37.403 Run-time dependency threads found: YES 00:02:37.403 Library dl found: YES 00:02:37.403 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.403 Run-time dependency json-c found: YES 0.17 00:02:37.403 Run-time dependency cmocka found: YES 1.1.7 00:02:37.403 Program pytest-3 found: NO 00:02:37.403 Program flake8 found: NO 00:02:37.403 Program misspell-fixer found: NO 00:02:37.403 Program restructuredtext-lint found: NO 00:02:37.403 Program valgrind found: YES (/usr/bin/valgrind) 00:02:37.403 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.403 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.403 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.403 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:37.403 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:37.403 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:37.403 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:37.403 Build targets in project: 8 00:02:37.403 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:37.403 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:37.403 00:02:37.403 libvfio-user 0.0.1 00:02:37.403 00:02:37.403 User defined options 00:02:37.403 buildtype : debug 00:02:37.403 default_library: shared 00:02:37.403 libdir : /usr/local/lib 00:02:37.403 00:02:37.403 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.362 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:38.362 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:38.362 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:38.362 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:38.362 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:38.362 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:38.362 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:38.362 [7/37] Compiling C object samples/null.p/null.c.o 00:02:38.362 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:38.362 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:38.362 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:38.362 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:38.362 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:38.633 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:38.633 [14/37] Compiling C object samples/server.p/server.c.o 00:02:38.633 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:38.633 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:38.633 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:38.633 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:38.633 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:38.633 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:38.633 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:38.633 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:38.633 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:38.633 [24/37] Compiling C object samples/client.p/client.c.o 00:02:38.633 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:38.633 [26/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:38.633 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:38.633 [28/37] Linking target samples/client 00:02:38.633 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:38.633 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:38.895 [31/37] Linking target test/unit_tests 00:02:38.895 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:38.895 [33/37] Linking target samples/server 00:02:38.895 [34/37] Linking target samples/gpio-pci-idio-16 00:02:38.895 [35/37] Linking target samples/lspci 00:02:38.895 [36/37] Linking target samples/null 00:02:38.895 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:38.895 INFO: autodetecting backend as ninja 00:02:38.895 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
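The block above is meson configuring the bundled libvfio-user subproject with the options SPDK's build passes it (buildtype debug, default_library shared); the DESTDIR'd `meson install` that follows stages the result under spdk/build/libvfio-user. A sketch of the equivalent manual sequence, assuming the same option set and a hypothetical staging directory:

    # hypothetical paths; options mirror the "User defined options" block above
    meson setup build-debug /path/to/spdk/libvfio-user \
          --buildtype debug --default-library shared --libdir /usr/local/lib
    ninja -C build-debug
    DESTDIR=/path/to/stage meson install --quiet -C build-debug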
00:02:39.156 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:39.761 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:39.761 ninja: no work to do. 00:02:51.964 CC lib/ut_mock/mock.o 00:02:51.964 CC lib/log/log.o 00:02:51.964 CC lib/log/log_flags.o 00:02:51.964 CC lib/ut/ut.o 00:02:51.964 CC lib/log/log_deprecated.o 00:02:51.964 LIB libspdk_ut.a 00:02:51.964 LIB libspdk_ut_mock.a 00:02:51.964 LIB libspdk_log.a 00:02:51.964 SO libspdk_ut_mock.so.6.0 00:02:51.964 SO libspdk_ut.so.2.0 00:02:51.964 SO libspdk_log.so.7.0 00:02:51.964 SYMLINK libspdk_ut_mock.so 00:02:51.964 SYMLINK libspdk_ut.so 00:02:51.964 SYMLINK libspdk_log.so 00:02:51.964 CXX lib/trace_parser/trace.o 00:02:51.964 CC lib/ioat/ioat.o 00:02:51.964 CC lib/dma/dma.o 00:02:51.964 CC lib/util/base64.o 00:02:51.964 CC lib/util/bit_array.o 00:02:51.964 CC lib/util/cpuset.o 00:02:51.964 CC lib/util/crc16.o 00:02:51.964 CC lib/util/crc32.o 00:02:51.964 CC lib/util/crc32c.o 00:02:51.964 CC lib/util/crc32_ieee.o 00:02:51.964 CC lib/util/crc64.o 00:02:51.964 CC lib/util/dif.o 00:02:51.964 CC lib/util/fd.o 00:02:51.964 CC lib/util/fd_group.o 00:02:51.964 CC lib/util/file.o 00:02:51.964 CC lib/util/hexlify.o 00:02:51.964 CC lib/util/iov.o 00:02:51.964 CC lib/util/math.o 00:02:51.964 CC lib/util/net.o 00:02:51.964 CC lib/util/pipe.o 00:02:51.964 CC lib/util/strerror_tls.o 00:02:51.964 CC lib/util/string.o 00:02:51.964 CC lib/util/uuid.o 00:02:51.964 CC lib/util/xor.o 00:02:51.964 CC lib/util/zipf.o 00:02:51.964 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.964 CC lib/vfio_user/host/vfio_user.o 00:02:51.964 LIB libspdk_dma.a 00:02:51.964 SO libspdk_dma.so.4.0 00:02:51.964 SYMLINK libspdk_dma.so 00:02:51.964 LIB libspdk_ioat.a 00:02:51.964 SO libspdk_ioat.so.7.0 00:02:51.964 SYMLINK libspdk_ioat.so 00:02:51.964 LIB libspdk_vfio_user.a 00:02:51.964 SO libspdk_vfio_user.so.5.0 00:02:51.964 SYMLINK libspdk_vfio_user.so 00:02:52.222 LIB libspdk_util.a 00:02:52.222 SO libspdk_util.so.10.0 00:02:52.222 SYMLINK libspdk_util.so 00:02:52.480 CC lib/rdma_utils/rdma_utils.o 00:02:52.480 CC lib/idxd/idxd.o 00:02:52.480 CC lib/vmd/vmd.o 00:02:52.480 CC lib/rdma_provider/common.o 00:02:52.480 CC lib/env_dpdk/env.o 00:02:52.480 CC lib/idxd/idxd_user.o 00:02:52.480 CC lib/vmd/led.o 00:02:52.480 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.480 CC lib/idxd/idxd_kernel.o 00:02:52.480 CC lib/env_dpdk/memory.o 00:02:52.480 CC lib/env_dpdk/pci.o 00:02:52.480 CC lib/env_dpdk/init.o 00:02:52.480 CC lib/env_dpdk/threads.o 00:02:52.480 CC lib/json/json_parse.o 00:02:52.480 CC lib/conf/conf.o 00:02:52.480 CC lib/env_dpdk/pci_ioat.o 00:02:52.480 CC lib/json/json_util.o 00:02:52.480 CC lib/env_dpdk/pci_virtio.o 00:02:52.480 CC lib/json/json_write.o 00:02:52.480 CC lib/env_dpdk/pci_idxd.o 00:02:52.480 CC lib/env_dpdk/pci_vmd.o 00:02:52.480 CC lib/env_dpdk/pci_event.o 00:02:52.480 CC lib/env_dpdk/sigbus_handler.o 00:02:52.480 CC lib/env_dpdk/pci_dpdk.o 00:02:52.480 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.480 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.480 LIB libspdk_trace_parser.a 00:02:52.480 SO libspdk_trace_parser.so.5.0 00:02:52.739 LIB libspdk_rdma_provider.a 00:02:52.739 SO libspdk_rdma_provider.so.6.0 00:02:52.739 LIB libspdk_conf.a 00:02:52.739 SYMLINK libspdk_trace_parser.so 00:02:52.739 SO libspdk_conf.so.6.0 00:02:52.739 LIB libspdk_rdma_utils.a 
00:02:52.739 SYMLINK libspdk_rdma_provider.so 00:02:52.739 SO libspdk_rdma_utils.so.1.0 00:02:52.739 SYMLINK libspdk_conf.so 00:02:52.997 SYMLINK libspdk_rdma_utils.so 00:02:52.997 LIB libspdk_json.a 00:02:52.997 SO libspdk_json.so.6.0 00:02:52.997 SYMLINK libspdk_json.so 00:02:52.997 LIB libspdk_idxd.a 00:02:52.997 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.997 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.997 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.997 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:53.255 SO libspdk_idxd.so.12.0 00:02:53.255 LIB libspdk_vmd.a 00:02:53.255 SO libspdk_vmd.so.6.0 00:02:53.256 SYMLINK libspdk_idxd.so 00:02:53.256 SYMLINK libspdk_vmd.so 00:02:53.514 LIB libspdk_jsonrpc.a 00:02:53.514 SO libspdk_jsonrpc.so.6.0 00:02:53.514 SYMLINK libspdk_jsonrpc.so 00:02:53.773 CC lib/rpc/rpc.o 00:02:53.773 LIB libspdk_rpc.a 00:02:54.031 LIB libspdk_env_dpdk.a 00:02:54.031 SO libspdk_rpc.so.6.0 00:02:54.031 SYMLINK libspdk_rpc.so 00:02:54.031 SO libspdk_env_dpdk.so.15.0 00:02:54.031 SYMLINK libspdk_env_dpdk.so 00:02:54.031 CC lib/keyring/keyring.o 00:02:54.031 CC lib/keyring/keyring_rpc.o 00:02:54.031 CC lib/notify/notify.o 00:02:54.031 CC lib/trace/trace.o 00:02:54.031 CC lib/notify/notify_rpc.o 00:02:54.031 CC lib/trace/trace_flags.o 00:02:54.031 CC lib/trace/trace_rpc.o 00:02:54.290 LIB libspdk_notify.a 00:02:54.290 SO libspdk_notify.so.6.0 00:02:54.290 LIB libspdk_keyring.a 00:02:54.290 SYMLINK libspdk_notify.so 00:02:54.290 LIB libspdk_trace.a 00:02:54.290 SO libspdk_keyring.so.1.0 00:02:54.548 SO libspdk_trace.so.10.0 00:02:54.548 SYMLINK libspdk_keyring.so 00:02:54.548 SYMLINK libspdk_trace.so 00:02:54.548 CC lib/sock/sock.o 00:02:54.548 CC lib/sock/sock_rpc.o 00:02:54.548 CC lib/thread/thread.o 00:02:54.548 CC lib/thread/iobuf.o 00:02:55.114 LIB libspdk_sock.a 00:02:55.114 SO libspdk_sock.so.10.0 00:02:55.114 SYMLINK libspdk_sock.so 00:02:55.373 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.373 CC lib/nvme/nvme_ctrlr.o 00:02:55.373 CC lib/nvme/nvme_fabric.o 00:02:55.373 CC lib/nvme/nvme_ns_cmd.o 00:02:55.373 CC lib/nvme/nvme_ns.o 00:02:55.373 CC lib/nvme/nvme_pcie_common.o 00:02:55.373 CC lib/nvme/nvme_pcie.o 00:02:55.373 CC lib/nvme/nvme_qpair.o 00:02:55.373 CC lib/nvme/nvme.o 00:02:55.373 CC lib/nvme/nvme_quirks.o 00:02:55.373 CC lib/nvme/nvme_transport.o 00:02:55.373 CC lib/nvme/nvme_discovery.o 00:02:55.373 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.373 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.373 CC lib/nvme/nvme_tcp.o 00:02:55.373 CC lib/nvme/nvme_opal.o 00:02:55.373 CC lib/nvme/nvme_io_msg.o 00:02:55.373 CC lib/nvme/nvme_poll_group.o 00:02:55.373 CC lib/nvme/nvme_zns.o 00:02:55.373 CC lib/nvme/nvme_stubs.o 00:02:55.373 CC lib/nvme/nvme_auth.o 00:02:55.373 CC lib/nvme/nvme_cuse.o 00:02:55.373 CC lib/nvme/nvme_vfio_user.o 00:02:55.373 CC lib/nvme/nvme_rdma.o 00:02:56.308 LIB libspdk_thread.a 00:02:56.308 SO libspdk_thread.so.10.1 00:02:56.308 SYMLINK libspdk_thread.so 00:02:56.566 CC lib/accel/accel.o 00:02:56.566 CC lib/vfu_tgt/tgt_endpoint.o 00:02:56.566 CC lib/virtio/virtio.o 00:02:56.566 CC lib/init/json_config.o 00:02:56.566 CC lib/vfu_tgt/tgt_rpc.o 00:02:56.566 CC lib/blob/blobstore.o 00:02:56.566 CC lib/virtio/virtio_vhost_user.o 00:02:56.566 CC lib/init/subsystem.o 00:02:56.566 CC lib/accel/accel_rpc.o 00:02:56.566 CC lib/accel/accel_sw.o 00:02:56.566 CC lib/blob/request.o 00:02:56.566 CC lib/virtio/virtio_vfio_user.o 00:02:56.566 CC lib/init/subsystem_rpc.o 00:02:56.566 CC lib/blob/zeroes.o 00:02:56.566 CC lib/virtio/virtio_pci.o 00:02:56.566 CC lib/init/rpc.o 
00:02:56.566 CC lib/blob/blob_bs_dev.o 00:02:56.824 LIB libspdk_init.a 00:02:56.824 SO libspdk_init.so.5.0 00:02:56.824 LIB libspdk_virtio.a 00:02:56.824 LIB libspdk_vfu_tgt.a 00:02:56.824 SYMLINK libspdk_init.so 00:02:56.824 SO libspdk_virtio.so.7.0 00:02:56.824 SO libspdk_vfu_tgt.so.3.0 00:02:56.824 SYMLINK libspdk_vfu_tgt.so 00:02:56.824 SYMLINK libspdk_virtio.so 00:02:57.082 CC lib/event/app.o 00:02:57.082 CC lib/event/reactor.o 00:02:57.082 CC lib/event/log_rpc.o 00:02:57.082 CC lib/event/app_rpc.o 00:02:57.082 CC lib/event/scheduler_static.o 00:02:57.340 LIB libspdk_event.a 00:02:57.340 SO libspdk_event.so.14.0 00:02:57.598 LIB libspdk_accel.a 00:02:57.598 SYMLINK libspdk_event.so 00:02:57.598 SO libspdk_accel.so.16.0 00:02:57.598 SYMLINK libspdk_accel.so 00:02:57.598 LIB libspdk_nvme.a 00:02:57.856 CC lib/bdev/bdev.o 00:02:57.856 SO libspdk_nvme.so.13.1 00:02:57.856 CC lib/bdev/bdev_rpc.o 00:02:57.856 CC lib/bdev/bdev_zone.o 00:02:57.856 CC lib/bdev/part.o 00:02:57.856 CC lib/bdev/scsi_nvme.o 00:02:58.115 SYMLINK libspdk_nvme.so 00:02:59.488 LIB libspdk_blob.a 00:02:59.488 SO libspdk_blob.so.11.0 00:02:59.747 SYMLINK libspdk_blob.so 00:02:59.747 CC lib/blobfs/blobfs.o 00:02:59.747 CC lib/blobfs/tree.o 00:02:59.747 CC lib/lvol/lvol.o 00:03:00.312 LIB libspdk_bdev.a 00:03:00.312 SO libspdk_bdev.so.16.0 00:03:00.577 SYMLINK libspdk_bdev.so 00:03:00.577 LIB libspdk_blobfs.a 00:03:00.577 CC lib/scsi/dev.o 00:03:00.577 CC lib/nbd/nbd.o 00:03:00.577 CC lib/ublk/ublk.o 00:03:00.577 CC lib/nvmf/ctrlr.o 00:03:00.577 CC lib/scsi/lun.o 00:03:00.577 CC lib/nbd/nbd_rpc.o 00:03:00.577 CC lib/ftl/ftl_core.o 00:03:00.577 CC lib/nvmf/ctrlr_discovery.o 00:03:00.577 CC lib/ublk/ublk_rpc.o 00:03:00.577 CC lib/scsi/port.o 00:03:00.577 CC lib/ftl/ftl_init.o 00:03:00.577 CC lib/nvmf/ctrlr_bdev.o 00:03:00.577 CC lib/scsi/scsi.o 00:03:00.577 CC lib/nvmf/subsystem.o 00:03:00.577 CC lib/ftl/ftl_layout.o 00:03:00.577 CC lib/ftl/ftl_debug.o 00:03:00.577 CC lib/nvmf/nvmf.o 00:03:00.577 CC lib/scsi/scsi_bdev.o 00:03:00.577 CC lib/ftl/ftl_io.o 00:03:00.577 CC lib/nvmf/nvmf_rpc.o 00:03:00.577 CC lib/scsi/scsi_pr.o 00:03:00.577 CC lib/scsi/scsi_rpc.o 00:03:00.577 CC lib/ftl/ftl_sb.o 00:03:00.577 CC lib/nvmf/transport.o 00:03:00.577 CC lib/ftl/ftl_l2p.o 00:03:00.577 CC lib/nvmf/tcp.o 00:03:00.577 CC lib/ftl/ftl_l2p_flat.o 00:03:00.577 CC lib/scsi/task.o 00:03:00.577 CC lib/nvmf/stubs.o 00:03:00.577 CC lib/ftl/ftl_nv_cache.o 00:03:00.577 CC lib/ftl/ftl_band.o 00:03:00.577 CC lib/nvmf/mdns_server.o 00:03:00.577 CC lib/nvmf/vfio_user.o 00:03:00.577 CC lib/ftl/ftl_band_ops.o 00:03:00.577 CC lib/nvmf/rdma.o 00:03:00.577 CC lib/nvmf/auth.o 00:03:00.577 CC lib/ftl/ftl_writer.o 00:03:00.577 CC lib/ftl/ftl_rq.o 00:03:00.577 CC lib/ftl/ftl_reloc.o 00:03:00.577 CC lib/ftl/ftl_l2p_cache.o 00:03:00.577 CC lib/ftl/ftl_p2l.o 00:03:00.577 CC lib/ftl/mngt/ftl_mngt.o 00:03:00.577 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:00.577 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:00.577 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:00.577 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:00.577 SO libspdk_blobfs.so.10.0 00:03:00.837 LIB libspdk_lvol.a 00:03:00.837 SYMLINK libspdk_blobfs.so 00:03:00.837 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:00.837 SO libspdk_lvol.so.10.0 00:03:01.094 SYMLINK libspdk_lvol.so 00:03:01.094 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:01.094 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:01.094 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:01.094 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:01.094 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:01.094 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:03:01.094 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.094 CC lib/ftl/utils/ftl_conf.o 00:03:01.094 CC lib/ftl/utils/ftl_md.o 00:03:01.094 CC lib/ftl/utils/ftl_mempool.o 00:03:01.094 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.094 CC lib/ftl/utils/ftl_property.o 00:03:01.094 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.094 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:01.094 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:01.094 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.094 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.353 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.353 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:01.353 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:01.353 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:01.353 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:01.353 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:01.353 CC lib/ftl/base/ftl_base_dev.o 00:03:01.353 CC lib/ftl/base/ftl_base_bdev.o 00:03:01.353 CC lib/ftl/ftl_trace.o 00:03:01.610 LIB libspdk_nbd.a 00:03:01.610 SO libspdk_nbd.so.7.0 00:03:01.610 LIB libspdk_scsi.a 00:03:01.610 SYMLINK libspdk_nbd.so 00:03:01.610 SO libspdk_scsi.so.9.0 00:03:01.868 SYMLINK libspdk_scsi.so 00:03:01.868 LIB libspdk_ublk.a 00:03:01.868 SO libspdk_ublk.so.3.0 00:03:01.868 SYMLINK libspdk_ublk.so 00:03:01.868 CC lib/vhost/vhost.o 00:03:01.868 CC lib/iscsi/conn.o 00:03:01.868 CC lib/vhost/vhost_rpc.o 00:03:01.868 CC lib/iscsi/init_grp.o 00:03:01.868 CC lib/vhost/vhost_blk.o 00:03:01.868 CC lib/iscsi/md5.o 00:03:01.868 CC lib/vhost/vhost_scsi.o 00:03:01.868 CC lib/iscsi/iscsi.o 00:03:01.868 CC lib/vhost/rte_vhost_user.o 00:03:01.868 CC lib/iscsi/param.o 00:03:01.868 CC lib/iscsi/portal_grp.o 00:03:01.868 CC lib/iscsi/tgt_node.o 00:03:01.868 CC lib/iscsi/iscsi_subsystem.o 00:03:01.868 CC lib/iscsi/iscsi_rpc.o 00:03:01.868 CC lib/iscsi/task.o 00:03:02.125 LIB libspdk_ftl.a 00:03:02.125 SO libspdk_ftl.so.9.0 00:03:02.690 SYMLINK libspdk_ftl.so 00:03:03.255 LIB libspdk_vhost.a 00:03:03.255 SO libspdk_vhost.so.8.0 00:03:03.255 SYMLINK libspdk_vhost.so 00:03:03.255 LIB libspdk_nvmf.a 00:03:03.255 LIB libspdk_iscsi.a 00:03:03.255 SO libspdk_nvmf.so.19.0 00:03:03.255 SO libspdk_iscsi.so.8.0 00:03:03.514 SYMLINK libspdk_iscsi.so 00:03:03.514 SYMLINK libspdk_nvmf.so 00:03:03.772 CC module/vfu_device/vfu_virtio.o 00:03:03.772 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.772 CC module/vfu_device/vfu_virtio_blk.o 00:03:03.772 CC module/vfu_device/vfu_virtio_scsi.o 00:03:03.772 CC module/vfu_device/vfu_virtio_rpc.o 00:03:04.032 CC module/accel/dsa/accel_dsa.o 00:03:04.032 CC module/accel/iaa/accel_iaa.o 00:03:04.032 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.032 CC module/blob/bdev/blob_bdev.o 00:03:04.032 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.032 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.032 CC module/accel/ioat/accel_ioat.o 00:03:04.032 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.032 CC module/keyring/file/keyring.o 00:03:04.032 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:04.032 CC module/keyring/file/keyring_rpc.o 00:03:04.032 CC module/keyring/linux/keyring.o 00:03:04.032 CC module/keyring/linux/keyring_rpc.o 00:03:04.032 CC module/accel/error/accel_error.o 00:03:04.032 CC module/accel/error/accel_error_rpc.o 00:03:04.032 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.032 CC module/sock/posix/posix.o 00:03:04.032 LIB libspdk_env_dpdk_rpc.a 00:03:04.032 SO libspdk_env_dpdk_rpc.so.6.0 00:03:04.032 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.032 LIB libspdk_keyring_linux.a 00:03:04.032 LIB 
libspdk_scheduler_gscheduler.a 00:03:04.032 LIB libspdk_keyring_file.a 00:03:04.032 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.032 SO libspdk_keyring_linux.so.1.0 00:03:04.032 SO libspdk_scheduler_gscheduler.so.4.0 00:03:04.032 SO libspdk_keyring_file.so.1.0 00:03:04.032 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:04.032 LIB libspdk_accel_error.a 00:03:04.032 LIB libspdk_accel_ioat.a 00:03:04.032 LIB libspdk_scheduler_dynamic.a 00:03:04.327 LIB libspdk_accel_iaa.a 00:03:04.327 SO libspdk_accel_error.so.2.0 00:03:04.327 SO libspdk_accel_ioat.so.6.0 00:03:04.327 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.327 SYMLINK libspdk_scheduler_gscheduler.so 00:03:04.327 SYMLINK libspdk_keyring_linux.so 00:03:04.327 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:04.327 SYMLINK libspdk_keyring_file.so 00:03:04.327 SO libspdk_accel_iaa.so.3.0 00:03:04.327 LIB libspdk_accel_dsa.a 00:03:04.327 SYMLINK libspdk_accel_error.so 00:03:04.327 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.327 LIB libspdk_blob_bdev.a 00:03:04.327 SYMLINK libspdk_accel_ioat.so 00:03:04.327 SO libspdk_accel_dsa.so.5.0 00:03:04.327 SYMLINK libspdk_accel_iaa.so 00:03:04.327 SO libspdk_blob_bdev.so.11.0 00:03:04.327 SYMLINK libspdk_accel_dsa.so 00:03:04.327 SYMLINK libspdk_blob_bdev.so 00:03:04.587 LIB libspdk_vfu_device.a 00:03:04.587 SO libspdk_vfu_device.so.3.0 00:03:04.587 CC module/bdev/null/bdev_null.o 00:03:04.587 CC module/bdev/gpt/gpt.o 00:03:04.587 CC module/bdev/malloc/bdev_malloc.o 00:03:04.588 CC module/bdev/null/bdev_null_rpc.o 00:03:04.588 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.588 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.588 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.588 CC module/bdev/delay/vbdev_delay.o 00:03:04.588 CC module/bdev/aio/bdev_aio.o 00:03:04.588 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.588 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.588 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.588 CC module/bdev/split/vbdev_split.o 00:03:04.588 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.588 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.588 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.588 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.588 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.588 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.588 CC module/bdev/error/vbdev_error.o 00:03:04.588 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.588 CC module/bdev/raid/bdev_raid.o 00:03:04.588 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.588 CC module/bdev/nvme/bdev_nvme.o 00:03:04.588 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.588 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.588 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.588 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.588 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.588 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.588 CC module/bdev/nvme/nvme_rpc.o 00:03:04.588 CC module/bdev/raid/raid0.o 00:03:04.588 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.588 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.588 CC module/bdev/nvme/vbdev_opal.o 00:03:04.588 CC module/bdev/raid/raid1.o 00:03:04.588 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.588 CC module/bdev/raid/concat.o 00:03:04.588 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.588 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.588 CC module/bdev/ftl/bdev_ftl.o 00:03:04.588 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.588 SYMLINK libspdk_vfu_device.so 00:03:04.845 LIB libspdk_sock_posix.a 00:03:04.845 SO libspdk_sock_posix.so.6.0 00:03:04.845 LIB 
libspdk_bdev_null.a 00:03:04.845 LIB libspdk_blobfs_bdev.a 00:03:04.845 SO libspdk_blobfs_bdev.so.6.0 00:03:04.845 SO libspdk_bdev_null.so.6.0 00:03:04.845 SYMLINK libspdk_sock_posix.so 00:03:05.102 LIB libspdk_bdev_split.a 00:03:05.102 SYMLINK libspdk_bdev_null.so 00:03:05.102 LIB libspdk_bdev_error.a 00:03:05.102 SO libspdk_bdev_split.so.6.0 00:03:05.102 SO libspdk_bdev_error.so.6.0 00:03:05.102 SYMLINK libspdk_blobfs_bdev.so 00:03:05.102 LIB libspdk_bdev_delay.a 00:03:05.102 LIB libspdk_bdev_ftl.a 00:03:05.102 LIB libspdk_bdev_gpt.a 00:03:05.102 SO libspdk_bdev_delay.so.6.0 00:03:05.102 SO libspdk_bdev_ftl.so.6.0 00:03:05.102 LIB libspdk_bdev_aio.a 00:03:05.102 SYMLINK libspdk_bdev_split.so 00:03:05.102 SYMLINK libspdk_bdev_error.so 00:03:05.102 SO libspdk_bdev_gpt.so.6.0 00:03:05.102 LIB libspdk_bdev_passthru.a 00:03:05.102 LIB libspdk_bdev_malloc.a 00:03:05.103 SO libspdk_bdev_aio.so.6.0 00:03:05.103 LIB libspdk_bdev_zone_block.a 00:03:05.103 SYMLINK libspdk_bdev_delay.so 00:03:05.103 SO libspdk_bdev_passthru.so.6.0 00:03:05.103 SYMLINK libspdk_bdev_ftl.so 00:03:05.103 SO libspdk_bdev_malloc.so.6.0 00:03:05.103 SO libspdk_bdev_zone_block.so.6.0 00:03:05.103 SYMLINK libspdk_bdev_gpt.so 00:03:05.103 LIB libspdk_bdev_iscsi.a 00:03:05.103 SYMLINK libspdk_bdev_aio.so 00:03:05.103 SO libspdk_bdev_iscsi.so.6.0 00:03:05.103 SYMLINK libspdk_bdev_passthru.so 00:03:05.103 SYMLINK libspdk_bdev_malloc.so 00:03:05.359 SYMLINK libspdk_bdev_zone_block.so 00:03:05.359 SYMLINK libspdk_bdev_iscsi.so 00:03:05.359 LIB libspdk_bdev_virtio.a 00:03:05.359 LIB libspdk_bdev_lvol.a 00:03:05.359 SO libspdk_bdev_virtio.so.6.0 00:03:05.359 SO libspdk_bdev_lvol.so.6.0 00:03:05.359 SYMLINK libspdk_bdev_virtio.so 00:03:05.359 SYMLINK libspdk_bdev_lvol.so 00:03:05.924 LIB libspdk_bdev_raid.a 00:03:05.924 SO libspdk_bdev_raid.so.6.0 00:03:05.924 SYMLINK libspdk_bdev_raid.so 00:03:06.867 LIB libspdk_bdev_nvme.a 00:03:07.124 SO libspdk_bdev_nvme.so.7.0 00:03:07.124 SYMLINK libspdk_bdev_nvme.so 00:03:07.380 CC module/event/subsystems/keyring/keyring.o 00:03:07.380 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:07.380 CC module/event/subsystems/scheduler/scheduler.o 00:03:07.380 CC module/event/subsystems/iobuf/iobuf.o 00:03:07.380 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:07.380 CC module/event/subsystems/sock/sock.o 00:03:07.380 CC module/event/subsystems/vmd/vmd.o 00:03:07.380 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:07.380 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:07.638 LIB libspdk_event_keyring.a 00:03:07.638 LIB libspdk_event_vhost_blk.a 00:03:07.638 LIB libspdk_event_scheduler.a 00:03:07.638 LIB libspdk_event_vfu_tgt.a 00:03:07.638 LIB libspdk_event_vmd.a 00:03:07.638 LIB libspdk_event_sock.a 00:03:07.638 SO libspdk_event_keyring.so.1.0 00:03:07.638 LIB libspdk_event_iobuf.a 00:03:07.638 SO libspdk_event_vhost_blk.so.3.0 00:03:07.638 SO libspdk_event_vfu_tgt.so.3.0 00:03:07.638 SO libspdk_event_scheduler.so.4.0 00:03:07.638 SO libspdk_event_sock.so.5.0 00:03:07.638 SO libspdk_event_vmd.so.6.0 00:03:07.638 SO libspdk_event_iobuf.so.3.0 00:03:07.638 SYMLINK libspdk_event_keyring.so 00:03:07.638 SYMLINK libspdk_event_vhost_blk.so 00:03:07.638 SYMLINK libspdk_event_vfu_tgt.so 00:03:07.638 SYMLINK libspdk_event_scheduler.so 00:03:07.638 SYMLINK libspdk_event_sock.so 00:03:07.638 SYMLINK libspdk_event_vmd.so 00:03:07.638 SYMLINK libspdk_event_iobuf.so 00:03:07.896 CC module/event/subsystems/accel/accel.o 00:03:08.155 LIB libspdk_event_accel.a 00:03:08.155 SO libspdk_event_accel.so.6.0 
00:03:08.155 SYMLINK libspdk_event_accel.so 00:03:08.413 CC module/event/subsystems/bdev/bdev.o 00:03:08.413 LIB libspdk_event_bdev.a 00:03:08.672 SO libspdk_event_bdev.so.6.0 00:03:08.672 SYMLINK libspdk_event_bdev.so 00:03:08.672 CC module/event/subsystems/nbd/nbd.o 00:03:08.672 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:08.672 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.672 CC module/event/subsystems/scsi/scsi.o 00:03:08.672 CC module/event/subsystems/ublk/ublk.o 00:03:08.931 LIB libspdk_event_nbd.a 00:03:08.931 LIB libspdk_event_ublk.a 00:03:08.931 LIB libspdk_event_scsi.a 00:03:08.931 SO libspdk_event_nbd.so.6.0 00:03:08.931 SO libspdk_event_ublk.so.3.0 00:03:08.931 SO libspdk_event_scsi.so.6.0 00:03:08.931 SYMLINK libspdk_event_nbd.so 00:03:08.931 SYMLINK libspdk_event_ublk.so 00:03:08.931 SYMLINK libspdk_event_scsi.so 00:03:08.931 LIB libspdk_event_nvmf.a 00:03:08.931 SO libspdk_event_nvmf.so.6.0 00:03:09.189 SYMLINK libspdk_event_nvmf.so 00:03:09.189 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.189 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.447 LIB libspdk_event_vhost_scsi.a 00:03:09.447 LIB libspdk_event_iscsi.a 00:03:09.447 SO libspdk_event_vhost_scsi.so.3.0 00:03:09.447 SO libspdk_event_iscsi.so.6.0 00:03:09.447 SYMLINK libspdk_event_vhost_scsi.so 00:03:09.447 SYMLINK libspdk_event_iscsi.so 00:03:09.447 SO libspdk.so.6.0 00:03:09.447 SYMLINK libspdk.so 00:03:09.712 CC app/trace_record/trace_record.o 00:03:09.712 CXX app/trace/trace.o 00:03:09.712 CC test/rpc_client/rpc_client_test.o 00:03:09.712 CC app/spdk_nvme_perf/perf.o 00:03:09.712 CC app/spdk_nvme_discover/discovery_aer.o 00:03:09.712 CC app/spdk_lspci/spdk_lspci.o 00:03:09.712 CC app/spdk_top/spdk_top.o 00:03:09.712 CC app/spdk_nvme_identify/identify.o 00:03:09.712 TEST_HEADER include/spdk/accel.h 00:03:09.712 TEST_HEADER include/spdk/accel_module.h 00:03:09.712 TEST_HEADER include/spdk/assert.h 00:03:09.712 TEST_HEADER include/spdk/barrier.h 00:03:09.712 TEST_HEADER include/spdk/base64.h 00:03:09.712 TEST_HEADER include/spdk/bdev_module.h 00:03:09.712 TEST_HEADER include/spdk/bdev.h 00:03:09.712 TEST_HEADER include/spdk/bdev_zone.h 00:03:09.712 TEST_HEADER include/spdk/bit_pool.h 00:03:09.712 TEST_HEADER include/spdk/bit_array.h 00:03:09.712 TEST_HEADER include/spdk/blob_bdev.h 00:03:09.712 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:09.712 TEST_HEADER include/spdk/blobfs.h 00:03:09.712 TEST_HEADER include/spdk/blob.h 00:03:09.712 TEST_HEADER include/spdk/conf.h 00:03:09.712 TEST_HEADER include/spdk/config.h 00:03:09.712 TEST_HEADER include/spdk/cpuset.h 00:03:09.712 TEST_HEADER include/spdk/crc16.h 00:03:09.712 TEST_HEADER include/spdk/crc32.h 00:03:09.712 TEST_HEADER include/spdk/dif.h 00:03:09.712 TEST_HEADER include/spdk/crc64.h 00:03:09.712 TEST_HEADER include/spdk/dma.h 00:03:09.712 TEST_HEADER include/spdk/env_dpdk.h 00:03:09.712 TEST_HEADER include/spdk/endian.h 00:03:09.712 TEST_HEADER include/spdk/env.h 00:03:09.712 TEST_HEADER include/spdk/event.h 00:03:09.712 TEST_HEADER include/spdk/fd_group.h 00:03:09.712 TEST_HEADER include/spdk/fd.h 00:03:09.712 TEST_HEADER include/spdk/file.h 00:03:09.712 TEST_HEADER include/spdk/ftl.h 00:03:09.712 TEST_HEADER include/spdk/gpt_spec.h 00:03:09.712 TEST_HEADER include/spdk/hexlify.h 00:03:09.712 TEST_HEADER include/spdk/histogram_data.h 00:03:09.712 TEST_HEADER include/spdk/idxd.h 00:03:09.712 TEST_HEADER include/spdk/idxd_spec.h 00:03:09.712 TEST_HEADER include/spdk/init.h 00:03:09.712 TEST_HEADER include/spdk/ioat.h 00:03:09.712 
TEST_HEADER include/spdk/ioat_spec.h 00:03:09.712 TEST_HEADER include/spdk/iscsi_spec.h 00:03:09.712 TEST_HEADER include/spdk/json.h 00:03:09.712 TEST_HEADER include/spdk/jsonrpc.h 00:03:09.712 TEST_HEADER include/spdk/keyring_module.h 00:03:09.712 TEST_HEADER include/spdk/keyring.h 00:03:09.712 TEST_HEADER include/spdk/likely.h 00:03:09.712 TEST_HEADER include/spdk/log.h 00:03:09.712 TEST_HEADER include/spdk/lvol.h 00:03:09.712 TEST_HEADER include/spdk/memory.h 00:03:09.712 TEST_HEADER include/spdk/mmio.h 00:03:09.712 TEST_HEADER include/spdk/nbd.h 00:03:09.712 TEST_HEADER include/spdk/net.h 00:03:09.712 TEST_HEADER include/spdk/notify.h 00:03:09.712 TEST_HEADER include/spdk/nvme.h 00:03:09.712 TEST_HEADER include/spdk/nvme_intel.h 00:03:09.712 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:09.712 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:09.712 TEST_HEADER include/spdk/nvme_spec.h 00:03:09.712 TEST_HEADER include/spdk/nvme_zns.h 00:03:09.712 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:09.712 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:09.712 TEST_HEADER include/spdk/nvmf.h 00:03:09.712 TEST_HEADER include/spdk/nvmf_spec.h 00:03:09.712 TEST_HEADER include/spdk/nvmf_transport.h 00:03:09.712 TEST_HEADER include/spdk/opal.h 00:03:09.712 TEST_HEADER include/spdk/opal_spec.h 00:03:09.712 TEST_HEADER include/spdk/pci_ids.h 00:03:09.712 TEST_HEADER include/spdk/pipe.h 00:03:09.712 TEST_HEADER include/spdk/queue.h 00:03:09.712 TEST_HEADER include/spdk/reduce.h 00:03:09.712 TEST_HEADER include/spdk/rpc.h 00:03:09.712 TEST_HEADER include/spdk/scheduler.h 00:03:09.712 TEST_HEADER include/spdk/scsi.h 00:03:09.712 TEST_HEADER include/spdk/scsi_spec.h 00:03:09.712 TEST_HEADER include/spdk/sock.h 00:03:09.712 TEST_HEADER include/spdk/stdinc.h 00:03:09.712 TEST_HEADER include/spdk/string.h 00:03:09.712 TEST_HEADER include/spdk/thread.h 00:03:09.712 TEST_HEADER include/spdk/trace.h 00:03:09.712 TEST_HEADER include/spdk/trace_parser.h 00:03:09.712 TEST_HEADER include/spdk/tree.h 00:03:09.712 TEST_HEADER include/spdk/ublk.h 00:03:09.712 TEST_HEADER include/spdk/util.h 00:03:09.712 TEST_HEADER include/spdk/uuid.h 00:03:09.712 TEST_HEADER include/spdk/version.h 00:03:09.712 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:09.712 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:09.712 TEST_HEADER include/spdk/vhost.h 00:03:09.712 TEST_HEADER include/spdk/vmd.h 00:03:09.712 TEST_HEADER include/spdk/zipf.h 00:03:09.712 TEST_HEADER include/spdk/xor.h 00:03:09.712 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:09.712 CXX test/cpp_headers/accel_module.o 00:03:09.712 CXX test/cpp_headers/accel.o 00:03:09.712 CXX test/cpp_headers/assert.o 00:03:09.712 CXX test/cpp_headers/barrier.o 00:03:09.712 CXX test/cpp_headers/base64.o 00:03:09.712 CXX test/cpp_headers/bdev.o 00:03:09.712 CXX test/cpp_headers/bdev_module.o 00:03:09.712 CXX test/cpp_headers/bdev_zone.o 00:03:09.712 CXX test/cpp_headers/bit_array.o 00:03:09.712 CXX test/cpp_headers/bit_pool.o 00:03:09.712 CXX test/cpp_headers/blob_bdev.o 00:03:09.712 CXX test/cpp_headers/blobfs_bdev.o 00:03:09.712 CXX test/cpp_headers/blobfs.o 00:03:09.712 CXX test/cpp_headers/blob.o 00:03:09.712 CXX test/cpp_headers/conf.o 00:03:09.712 CC app/spdk_dd/spdk_dd.o 00:03:09.712 CXX test/cpp_headers/config.o 00:03:09.712 CXX test/cpp_headers/cpuset.o 00:03:09.712 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.712 CXX test/cpp_headers/crc16.o 00:03:09.712 CC app/nvmf_tgt/nvmf_main.o 00:03:09.712 CXX test/cpp_headers/crc32.o 00:03:09.712 CC app/spdk_tgt/spdk_tgt.o 00:03:09.712 CC 
test/app/histogram_perf/histogram_perf.o 00:03:09.712 CC test/app/stub/stub.o 00:03:09.712 CC test/app/jsoncat/jsoncat.o 00:03:09.712 CC test/env/vtophys/vtophys.o 00:03:09.712 CC test/env/pci/pci_ut.o 00:03:09.712 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:09.712 CC examples/ioat/perf/perf.o 00:03:09.712 CC test/thread/poller_perf/poller_perf.o 00:03:09.712 CC examples/util/zipf/zipf.o 00:03:09.712 CC test/env/memory/memory_ut.o 00:03:09.978 CC app/fio/nvme/fio_plugin.o 00:03:09.978 CC examples/ioat/verify/verify.o 00:03:09.978 CC test/dma/test_dma/test_dma.o 00:03:09.978 CC app/fio/bdev/fio_plugin.o 00:03:09.978 CC test/app/bdev_svc/bdev_svc.o 00:03:09.979 CC test/env/mem_callbacks/mem_callbacks.o 00:03:09.979 LINK spdk_lspci 00:03:09.979 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:10.239 LINK rpc_client_test 00:03:10.239 LINK spdk_nvme_discover 00:03:10.239 LINK jsoncat 00:03:10.239 LINK histogram_perf 00:03:10.239 LINK vtophys 00:03:10.239 LINK poller_perf 00:03:10.239 LINK zipf 00:03:10.239 CXX test/cpp_headers/crc64.o 00:03:10.239 LINK interrupt_tgt 00:03:10.239 CXX test/cpp_headers/dif.o 00:03:10.239 LINK nvmf_tgt 00:03:10.239 LINK env_dpdk_post_init 00:03:10.239 CXX test/cpp_headers/dma.o 00:03:10.239 CXX test/cpp_headers/endian.o 00:03:10.239 CXX test/cpp_headers/env_dpdk.o 00:03:10.239 CXX test/cpp_headers/env.o 00:03:10.239 CXX test/cpp_headers/event.o 00:03:10.239 CXX test/cpp_headers/fd_group.o 00:03:10.239 LINK stub 00:03:10.239 CXX test/cpp_headers/fd.o 00:03:10.239 CXX test/cpp_headers/file.o 00:03:10.239 CXX test/cpp_headers/ftl.o 00:03:10.239 CXX test/cpp_headers/gpt_spec.o 00:03:10.239 LINK spdk_trace_record 00:03:10.239 LINK iscsi_tgt 00:03:10.239 CXX test/cpp_headers/hexlify.o 00:03:10.239 CXX test/cpp_headers/histogram_data.o 00:03:10.239 CXX test/cpp_headers/idxd.o 00:03:10.239 LINK ioat_perf 00:03:10.239 CXX test/cpp_headers/idxd_spec.o 00:03:10.239 LINK spdk_tgt 00:03:10.239 LINK verify 00:03:10.239 LINK bdev_svc 00:03:10.239 CXX test/cpp_headers/init.o 00:03:10.239 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:10.502 CXX test/cpp_headers/ioat.o 00:03:10.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:10.502 CXX test/cpp_headers/ioat_spec.o 00:03:10.502 CXX test/cpp_headers/iscsi_spec.o 00:03:10.502 CXX test/cpp_headers/json.o 00:03:10.502 CXX test/cpp_headers/jsonrpc.o 00:03:10.502 LINK spdk_dd 00:03:10.502 CXX test/cpp_headers/keyring.o 00:03:10.502 CXX test/cpp_headers/keyring_module.o 00:03:10.502 LINK spdk_trace 00:03:10.502 CXX test/cpp_headers/likely.o 00:03:10.502 CXX test/cpp_headers/log.o 00:03:10.502 CXX test/cpp_headers/lvol.o 00:03:10.769 CXX test/cpp_headers/memory.o 00:03:10.769 CXX test/cpp_headers/mmio.o 00:03:10.769 CXX test/cpp_headers/nbd.o 00:03:10.769 CXX test/cpp_headers/net.o 00:03:10.769 CXX test/cpp_headers/notify.o 00:03:10.769 LINK pci_ut 00:03:10.769 CXX test/cpp_headers/nvme.o 00:03:10.769 CXX test/cpp_headers/nvme_intel.o 00:03:10.769 CXX test/cpp_headers/nvme_ocssd.o 00:03:10.769 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:10.769 CXX test/cpp_headers/nvme_spec.o 00:03:10.769 CXX test/cpp_headers/nvme_zns.o 00:03:10.769 CXX test/cpp_headers/nvmf_cmd.o 00:03:10.769 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:10.769 CXX test/cpp_headers/nvmf.o 00:03:10.769 LINK test_dma 00:03:10.769 CXX test/cpp_headers/nvmf_spec.o 00:03:10.769 CXX test/cpp_headers/nvmf_transport.o 00:03:10.769 CC test/event/event_perf/event_perf.o 00:03:10.769 LINK nvme_fuzz 00:03:10.769 CXX 
test/cpp_headers/opal.o 00:03:10.769 CC test/event/reactor/reactor.o 00:03:10.769 CXX test/cpp_headers/opal_spec.o 00:03:10.769 CXX test/cpp_headers/pci_ids.o 00:03:10.769 CC test/event/reactor_perf/reactor_perf.o 00:03:11.028 CC test/event/app_repeat/app_repeat.o 00:03:11.028 CC examples/sock/hello_world/hello_sock.o 00:03:11.028 LINK spdk_nvme 00:03:11.028 CXX test/cpp_headers/pipe.o 00:03:11.028 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.028 CXX test/cpp_headers/queue.o 00:03:11.028 CC examples/vmd/led/led.o 00:03:11.028 CC test/event/scheduler/scheduler.o 00:03:11.028 CC examples/thread/thread/thread_ex.o 00:03:11.028 CC examples/idxd/perf/perf.o 00:03:11.028 LINK spdk_bdev 00:03:11.028 CXX test/cpp_headers/reduce.o 00:03:11.028 CXX test/cpp_headers/rpc.o 00:03:11.028 CXX test/cpp_headers/scheduler.o 00:03:11.028 CXX test/cpp_headers/scsi.o 00:03:11.028 CXX test/cpp_headers/scsi_spec.o 00:03:11.028 CXX test/cpp_headers/sock.o 00:03:11.028 CXX test/cpp_headers/stdinc.o 00:03:11.028 CXX test/cpp_headers/string.o 00:03:11.028 CXX test/cpp_headers/thread.o 00:03:11.028 CXX test/cpp_headers/trace.o 00:03:11.028 CXX test/cpp_headers/trace_parser.o 00:03:11.028 CXX test/cpp_headers/tree.o 00:03:11.028 CXX test/cpp_headers/ublk.o 00:03:11.028 CXX test/cpp_headers/util.o 00:03:11.028 CXX test/cpp_headers/uuid.o 00:03:11.028 CXX test/cpp_headers/version.o 00:03:11.028 CXX test/cpp_headers/vfio_user_pci.o 00:03:11.028 LINK reactor 00:03:11.028 CXX test/cpp_headers/vfio_user_spec.o 00:03:11.028 LINK event_perf 00:03:11.290 LINK reactor_perf 00:03:11.290 CXX test/cpp_headers/vhost.o 00:03:11.290 CXX test/cpp_headers/vmd.o 00:03:11.290 CXX test/cpp_headers/xor.o 00:03:11.290 CXX test/cpp_headers/zipf.o 00:03:11.290 LINK mem_callbacks 00:03:11.290 CC app/vhost/vhost.o 00:03:11.290 LINK lsvmd 00:03:11.290 LINK spdk_nvme_perf 00:03:11.290 LINK app_repeat 00:03:11.290 LINK led 00:03:11.290 LINK vhost_fuzz 00:03:11.290 LINK spdk_nvme_identify 00:03:11.290 LINK spdk_top 00:03:11.290 LINK scheduler 00:03:11.551 LINK hello_sock 00:03:11.551 CC test/nvme/aer/aer.o 00:03:11.551 CC test/nvme/sgl/sgl.o 00:03:11.551 LINK thread 00:03:11.551 CC test/nvme/e2edp/nvme_dp.o 00:03:11.551 CC test/nvme/reset/reset.o 00:03:11.551 CC test/nvme/err_injection/err_injection.o 00:03:11.551 CC test/nvme/startup/startup.o 00:03:11.551 CC test/nvme/overhead/overhead.o 00:03:11.551 CC test/nvme/reserve/reserve.o 00:03:11.551 CC test/nvme/simple_copy/simple_copy.o 00:03:11.551 CC test/nvme/connect_stress/connect_stress.o 00:03:11.551 CC test/accel/dif/dif.o 00:03:11.551 CC test/blobfs/mkfs/mkfs.o 00:03:11.551 CC test/nvme/boot_partition/boot_partition.o 00:03:11.551 CC test/nvme/compliance/nvme_compliance.o 00:03:11.551 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.551 CC test/nvme/cuse/cuse.o 00:03:11.551 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.551 CC test/nvme/fdp/fdp.o 00:03:11.551 LINK vhost 00:03:11.551 LINK idxd_perf 00:03:11.551 CC test/lvol/esnap/esnap.o 00:03:11.809 LINK boot_partition 00:03:11.809 LINK err_injection 00:03:11.809 LINK startup 00:03:11.809 LINK simple_copy 00:03:11.809 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:11.809 CC examples/nvme/hello_world/hello_world.o 00:03:11.809 CC examples/nvme/hotplug/hotplug.o 00:03:11.809 CC examples/nvme/reconnect/reconnect.o 00:03:11.809 LINK reset 00:03:11.809 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:11.809 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:11.809 CC examples/nvme/abort/abort.o 00:03:11.809 CC 
examples/nvme/arbitration/arbitration.o 00:03:11.809 LINK connect_stress 00:03:11.809 LINK overhead 00:03:11.809 LINK mkfs 00:03:11.809 LINK reserve 00:03:11.809 LINK doorbell_aers 00:03:11.809 LINK fused_ordering 00:03:12.067 LINK nvme_compliance 00:03:12.067 LINK sgl 00:03:12.067 LINK nvme_dp 00:03:12.067 LINK aer 00:03:12.067 LINK memory_ut 00:03:12.067 LINK fdp 00:03:12.067 CC examples/accel/perf/accel_perf.o 00:03:12.067 LINK pmr_persistence 00:03:12.067 LINK cmb_copy 00:03:12.067 CC examples/blob/hello_world/hello_blob.o 00:03:12.067 CC examples/blob/cli/blobcli.o 00:03:12.067 LINK dif 00:03:12.067 LINK hotplug 00:03:12.067 LINK hello_world 00:03:12.325 LINK reconnect 00:03:12.325 LINK abort 00:03:12.325 LINK arbitration 00:03:12.325 LINK hello_blob 00:03:12.325 LINK nvme_manage 00:03:12.582 CC test/bdev/bdevio/bdevio.o 00:03:12.582 LINK accel_perf 00:03:12.582 LINK blobcli 00:03:12.582 LINK iscsi_fuzz 00:03:12.840 CC examples/bdev/hello_world/hello_bdev.o 00:03:12.840 CC examples/bdev/bdevperf/bdevperf.o 00:03:12.840 LINK bdevio 00:03:13.098 LINK cuse 00:03:13.098 LINK hello_bdev 00:03:13.664 LINK bdevperf 00:03:13.922 CC examples/nvmf/nvmf/nvmf.o 00:03:14.489 LINK nvmf 00:03:17.020 LINK esnap 00:03:17.020 00:03:17.020 real 0m41.248s 00:03:17.020 user 7m26.158s 00:03:17.020 sys 1m48.295s 00:03:17.020 18:03:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:17.020 18:03:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:17.020 ************************************ 00:03:17.020 END TEST make 00:03:17.020 ************************************ 00:03:17.020 18:03:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:17.020 18:03:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:17.020 18:03:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:17.020 18:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.020 18:03:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:17.020 18:03:42 -- pm/common@44 -- $ pid=1231167 00:03:17.020 18:03:42 -- pm/common@50 -- $ kill -TERM 1231167 00:03:17.020 18:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.020 18:03:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:17.020 18:03:42 -- pm/common@44 -- $ pid=1231169 00:03:17.020 18:03:42 -- pm/common@50 -- $ kill -TERM 1231169 00:03:17.020 18:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.020 18:03:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:17.020 18:03:42 -- pm/common@44 -- $ pid=1231171 00:03:17.020 18:03:42 -- pm/common@50 -- $ kill -TERM 1231171 00:03:17.020 18:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.020 18:03:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:17.020 18:03:42 -- pm/common@44 -- $ pid=1231197 00:03:17.020 18:03:42 -- pm/common@50 -- $ sudo -E kill -TERM 1231197 00:03:17.020 18:03:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:17.020 18:03:43 -- nvmf/common.sh@7 -- # uname -s 00:03:17.020 18:03:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:17.020 18:03:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:17.020 18:03:43 -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:03:17.020 18:03:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:17.020 18:03:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:17.020 18:03:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:17.020 18:03:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:17.020 18:03:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:17.020 18:03:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:17.020 18:03:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:17.020 18:03:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:17.020 18:03:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:17.020 18:03:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:17.020 18:03:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:17.020 18:03:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:17.020 18:03:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:17.020 18:03:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:17.020 18:03:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:17.020 18:03:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:17.020 18:03:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:17.020 18:03:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.021 18:03:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.021 18:03:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.021 18:03:43 -- paths/export.sh@5 -- # export PATH 00:03:17.021 18:03:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.021 18:03:43 -- nvmf/common.sh@47 -- # : 0 00:03:17.021 18:03:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:17.021 18:03:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:17.021 18:03:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:17.021 18:03:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:17.021 18:03:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:17.021 18:03:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:17.021 18:03:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:17.021 18:03:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:17.021 18:03:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:17.021 18:03:43 -- spdk/autotest.sh@32 -- # uname -s 
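The nvmf/common.sh block above seeds the TCP test environment: listener ports 4420-4422 (NVMF_PORT and friends), a host NQN/ID pair freshly generated with `nvme gen-hostnqn`, and the target subsystem NQN nqn.2016-06.io.spdk:testnqn. A minimal sketch of how a client-side test typically consumes those values, assuming a hypothetical target address (192.0.2.10 is a placeholder; everything else is taken from the values above):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, as NVME_HOSTID above
    nvme connect --transport=tcp --traddr=192.0.2.10 --trsvcid=4420 \
         --nqn=nqn.2016-06.io.spdk:testnqn \
         --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"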
00:03:17.021 18:03:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:17.021 18:03:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:17.021 18:03:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:17.021 18:03:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:17.021 18:03:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:17.021 18:03:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:17.021 18:03:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:17.021 18:03:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:17.021 18:03:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1303037 00:03:17.021 18:03:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:17.021 18:03:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:17.021 18:03:43 -- pm/common@17 -- # local monitor 00:03:17.021 18:03:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.021 18:03:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.021 18:03:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.021 18:03:43 -- pm/common@21 -- # date +%s 00:03:17.021 18:03:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.021 18:03:43 -- pm/common@21 -- # date +%s 00:03:17.021 18:03:43 -- pm/common@25 -- # sleep 1 00:03:17.021 18:03:43 -- pm/common@21 -- # date +%s 00:03:17.021 18:03:43 -- pm/common@21 -- # date +%s 00:03:17.021 18:03:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722009823 00:03:17.021 18:03:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722009823 00:03:17.021 18:03:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722009823 00:03:17.021 18:03:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722009823 00:03:17.021 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722009823_collect-vmstat.pm.log 00:03:17.021 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722009823_collect-cpu-load.pm.log 00:03:17.021 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722009823_collect-cpu-temp.pm.log 00:03:17.021 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722009823_collect-bmc-pm.bmc.pm.log 00:03:17.958 18:03:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.958 18:03:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.958 18:03:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:17.958 18:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:17.958 18:03:44 -- 
spdk/autotest.sh@59 -- # create_test_list 00:03:17.958 18:03:44 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:17.958 18:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:18.216 18:03:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:18.216 18:03:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:18.216 18:03:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:18.216 18:03:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:18.216 18:03:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:18.216 18:03:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:18.216 18:03:44 -- common/autotest_common.sh@1455 -- # uname 00:03:18.216 18:03:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:18.216 18:03:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:18.216 18:03:44 -- common/autotest_common.sh@1475 -- # uname 00:03:18.216 18:03:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:18.216 18:03:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:18.216 18:03:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:18.216 18:03:44 -- spdk/autotest.sh@72 -- # hash lcov 00:03:18.216 18:03:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:18.217 18:03:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:18.217 --rc lcov_branch_coverage=1 00:03:18.217 --rc lcov_function_coverage=1 00:03:18.217 --rc genhtml_branch_coverage=1 00:03:18.217 --rc genhtml_function_coverage=1 00:03:18.217 --rc genhtml_legend=1 00:03:18.217 --rc geninfo_all_blocks=1 00:03:18.217 ' 00:03:18.217 18:03:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:18.217 --rc lcov_branch_coverage=1 00:03:18.217 --rc lcov_function_coverage=1 00:03:18.217 --rc genhtml_branch_coverage=1 00:03:18.217 --rc genhtml_function_coverage=1 00:03:18.217 --rc genhtml_legend=1 00:03:18.217 --rc geninfo_all_blocks=1 00:03:18.217 ' 00:03:18.217 18:03:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:18.217 --rc lcov_branch_coverage=1 00:03:18.217 --rc lcov_function_coverage=1 00:03:18.217 --rc genhtml_branch_coverage=1 00:03:18.217 --rc genhtml_function_coverage=1 00:03:18.217 --rc genhtml_legend=1 00:03:18.217 --rc geninfo_all_blocks=1 00:03:18.217 --no-external' 00:03:18.217 18:03:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:18.217 --rc lcov_branch_coverage=1 00:03:18.217 --rc lcov_function_coverage=1 00:03:18.217 --rc genhtml_branch_coverage=1 00:03:18.217 --rc genhtml_function_coverage=1 00:03:18.217 --rc genhtml_legend=1 00:03:18.217 --rc geninfo_all_blocks=1 00:03:18.217 --no-external' 00:03:18.217 18:03:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:18.217 lcov: LCOV version 1.14 00:03:18.217 18:03:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:36.331 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:36.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:48.536 [geninfo then prints the identical two-line warning pair ('<path>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <path>.gcno') for every header-only stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: accel, accel_module, assert, barrier, base64, bdev, bdev_module, bdev_zone, bit_array, bit_pool, blob, blob_bdev, blobfs, blobfs_bdev, conf, config, cpuset, crc16, crc32, crc64, dif, dma, endian, env, env_dpdk, event, fd, fd_group, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, net, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf, nvmf_cmd, nvmf_fc_spec, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util and uuid; the last seven repetitions follow verbatim]
00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:48.537 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:48.537 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:48.537 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:48.537 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:48.537 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:48.537 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:48.538 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:52.720 18:04:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:52.720 18:04:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.720 18:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:52.720 18:04:18 -- spdk/autotest.sh@91 -- # rm -f 00:03:52.720 18:04:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.286 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:53.545 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:53.545 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:53.545 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:53.545 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:53.545 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:53.545 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:53.545 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:53.545 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:53.545 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:53.545 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:53.545 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:53.545 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:53.545 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:53.545 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:53.545 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:53.545 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:53.804 18:04:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:53.804 18:04:19 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
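The pre-cleanup step traced in the surrounding lines does two things: get_zoned_devs records any namespace whose /sys/block/<dev>/queue/zoned attribute reports something other than "none", and block_in_use then probes /dev/nvme0n1 for a partition table before autotest zeroes its first MiB. A minimal standalone sketch of both checks, assuming the same sysfs layout and blkid from util-linux; the device name mirrors this run, and the logic only approximates the SPDK helpers rather than reproducing them:

#!/usr/bin/env bash
# Sketch only: approximates get_zoned_devs plus the guarded wipe.
set -u

# 1) A block device counts as zoned when /sys/block/<dev>/queue/zoned
#    holds anything other than "none" (e.g. "host-managed").
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    mode=$(<"$nvme/queue/zoned")
    if [[ $mode != none ]]; then
        zoned_devs[${nvme##*/}]=$mode
    fi
done
echo "zoned namespaces: ${#zoned_devs[@]}"      # 0 in the run above

# 2) Zero the first MiB of a namespace only when it is not zoned and
#    blkid reports no partition-table type (the log additionally runs
#    scripts/spdk-gpt.py, which bailed with "No valid GPT data").
dev=/dev/nvme0n1                                # example from this log
if [[ -z ${zoned_devs[${dev##*/}]:-} ]]; then
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1 # clears stale metadata
    else
        echo "skipping $dev: partition table '$pt' present" >&2
    fi
fi

With nothing zoned and no GPT found, the run below proceeds to exactly that dd (1+0 records, ~1.0 MiB copied) and then syncs.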
00:03:53.804 18:04:19 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.804 18:04:19 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.804 18:04:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.804 18:04:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.804 18:04:19 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.804 18:04:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.804 18:04:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.804 18:04:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:53.804 18:04:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.804 18:04:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:53.804 18:04:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:53.804 18:04:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:53.804 18:04:19 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.804 No valid GPT data, bailing 00:03:53.804 18:04:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.804 18:04:19 -- scripts/common.sh@391 -- # pt= 00:03:53.804 18:04:19 -- scripts/common.sh@392 -- # return 1 00:03:53.804 18:04:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.804 1+0 records in 00:03:53.804 1+0 records out 00:03:53.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0021428 s, 489 MB/s 00:03:53.804 18:04:19 -- spdk/autotest.sh@118 -- # sync 00:03:53.804 18:04:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.804 18:04:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.804 18:04:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:55.715 18:04:21 -- spdk/autotest.sh@124 -- # uname -s 00:03:55.715 18:04:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:55.715 18:04:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:55.716 18:04:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.716 18:04:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.716 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:55.716 ************************************ 00:03:55.716 START TEST setup.sh 00:03:55.716 ************************************ 00:03:55.716 18:04:21 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:55.716 * Looking for test storage... 
00:03:55.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.716 18:04:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:55.716 18:04:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:55.716 18:04:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:55.716 18:04:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.716 18:04:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.716 18:04:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.716 ************************************ 00:03:55.716 START TEST acl 00:03:55.716 ************************************ 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:55.716 * Looking for test storage... 00:03:55.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.716 18:04:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.716 18:04:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.716 18:04:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:55.716 18:04:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:55.716 18:04:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:55.716 18:04:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:55.716 18:04:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:55.716 18:04:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.716 18:04:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.090 18:04:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:57.090 18:04:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:57.090 18:04:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.090 18:04:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:57.090 18:04:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.090 18:04:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:58.461 Hugepages 00:03:58.461 node hugesize free / total 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.461 00 00:03:58.461 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.461 18:04:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [the status loop then logs the same four xtrace lines, setup/acl.sh@19 '[[ <bdf> == *:*:*.* ]]', @20 '[[ ioatdma == nvme ]]', @20 'continue', @18 'read -r _ dev _ _ _ driver _', for each ioatdma channel: 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7] 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:58.462 18:04:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.462 18:04:24 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.462 18:04:24 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.462 18:04:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.462 ************************************ 00:03:58.462 START TEST denied 00:03:58.462 ************************************ 00:03:58.462 18:04:24 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:58.462 18:04:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:58.462 18:04:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output
config 00:03:58.462 18:04:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:58.462 18:04:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.462 18:04:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.834 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.834 18:04:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.395 00:04:02.395 real 0m3.811s 00:04:02.395 user 0m1.127s 00:04:02.395 sys 0m1.770s 00:04:02.395 18:04:28 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.395 18:04:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:02.395 ************************************ 00:04:02.395 END TEST denied 00:04:02.395 ************************************ 00:04:02.395 18:04:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:02.395 18:04:28 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.395 18:04:28 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.395 18:04:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:02.395 ************************************ 00:04:02.395 START TEST allowed 00:04:02.395 ************************************ 00:04:02.395 18:04:28 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:02.395 18:04:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:02.395 18:04:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:02.395 18:04:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:02.395 18:04:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.395 18:04:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.930 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.930 18:04:30 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:04.930 18:04:30 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:04.930 18:04:30 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:04.930 18:04:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.930 18:04:30 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.309 00:04:06.309 real 0m4.001s 00:04:06.309 user 0m1.027s 00:04:06.309 sys 0m1.806s 00:04:06.309 18:04:32 
setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.309 18:04:32 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:06.309 ************************************ 00:04:06.309 END TEST allowed 00:04:06.309 ************************************ 00:04:06.309 00:04:06.309 real 0m10.577s 00:04:06.309 user 0m3.236s 00:04:06.309 sys 0m5.337s 00:04:06.309 18:04:32 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.309 18:04:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.309 ************************************ 00:04:06.309 END TEST acl 00:04:06.309 ************************************ 00:04:06.309 18:04:32 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.309 18:04:32 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.309 18:04:32 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.309 18:04:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.309 ************************************ 00:04:06.309 START TEST hugepages 00:04:06.309 ************************************ 00:04:06.309 18:04:32 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.309 * Looking for test storage... 00:04:06.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.309 18:04:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.310 18:04:32 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42241512 kB' 'MemAvailable: 45749236 kB' 'Buffers: 2704 kB' 'Cached: 11733816 kB' 'SwapCached: 0 kB' 'Active: 8736424 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340928 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 509612 kB' 'Mapped: 172860 kB' 'Shmem: 7834832 kB' 'KReclaimable: 199752 kB' 'Slab: 575272 kB' 'SReclaimable: 199752 kB' 'SUnreclaim: 375520 kB' 'KernelStack: 12864 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9423404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:06.310 [get_meminfo then walks /proc/meminfo line by line; for every field from MemTotal through HugePages_Surp the xtrace repeats the same four statements, setup/common.sh@32 '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]', @32 'continue', @31 "IFS=': '", @31 'read -r var val _', until the Hugepagesize line matches] 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:06.311 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.312 18:04:32 setup.sh.hugepages --
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.312 18:04:32 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:06.312 18:04:32 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.312 18:04:32 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.312 18:04:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.571 ************************************ 00:04:06.571 START TEST default_setup 00:04:06.571 ************************************ 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.571 18:04:32 
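The clear_hp pass traced above writes 0 into every per-node hugepage count before the test installs its own allocation, then exports CLEAR_HUGE=yes so later stages know the slate is clean. A hedged sketch of that zeroing loop, assuming the sysfs layout shown in the trace (needs root; node and page-size directories vary by machine):

    # zero out every preallocated hugepage pool on every NUMA node
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes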
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.571 18:04:32 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.508 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:07.508 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:07.508 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:07.508 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:07.508 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:07.508 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:07.508 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:07.508 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:07.767 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:07.767 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:08.710 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- 
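The rebind lines above show scripts/setup.sh moving sixteen ioatdma channels and the NVMe controller at 0000:88:00.0 over to vfio-pci so user-space drivers can claim them. The generic sysfs mechanics for one such rebind look roughly like this (a sketch of the standard kernel interface, not the SPDK script itself; BDF is a placeholder):

    BDF=0000:88:00.0
    # detach the device from whatever kernel driver currently owns it
    if [ -e "/sys/bus/pci/devices/$BDF/driver" ]; then
        echo "$BDF" > "/sys/bus/pci/devices/$BDF/driver/unbind"
    fi
    # steer the next probe to vfio-pci and trigger it
    echo vfio-pci > "/sys/bus/pci/devices/$BDF/driver_override"
    echo "$BDF" > /sys/bus/pci/drivers_probe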
setup/common.sh@31 -- # read -r var val _ 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44326104 kB' 'MemAvailable: 47833800 kB' 'Buffers: 2704 kB' 'Cached: 11733912 kB' 'SwapCached: 0 kB' 'Active: 8754004 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358508 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526828 kB' 'Mapped: 172640 kB' 'Shmem: 7834928 kB' 'KReclaimable: 199696 kB' 'Slab: 575276 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375580 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.710 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.711 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [setup/common.sh@31-32 xtrace repeats the read/compare/continue cycle for every /proc/meminfo key from Active through HardwareCorrupted, in the same order as the Hugepagesize scan above, none of which matches AnonHugePages] 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44332408 kB' 'MemAvailable: 47840104 kB' 'Buffers: 2704 kB' 'Cached: 11733916 kB' 'SwapCached: 0 kB' 'Active: 8753924 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358428 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526804 kB' 'Mapped: 172128 kB' 'Shmem: 7834932 kB' 'KReclaimable: 199696 kB' 'Slab: 575276 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375580 kB' 'KernelStack: 12848 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.712 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.712 18:04:34 
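The snapshot printed just above is internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 x 2048 kB = 2097152 kB (2 GiB), exactly the Hugetlb: 2097152 kB line, so default_setup's request for 1024 pages took effect. A one-liner to recompute that from a live /proc/meminfo (assumes you cross-check against the Hugetlb field, present on recent kernels):

    awk '/^HugePages_Total/ {n=$2} /^Hugepagesize/ {sz=$2}
         END {printf "%d pages x %d kB = %d kB\n", n, sz, n*sz}' /proc/meminfo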
setup.sh.hugepages.default_setup -- [setup/common.sh@31-32 xtrace repeats the read/compare/continue cycle for every /proc/meminfo key from MemFree through FileHugePages, none of which matches HugePages_Surp] 00:04:08.714 18:04:34
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.714 18:04:34 
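At this point the verifier has anon=0 (no transparent hugepages counted) and surp=0 (no surplus pages allocated beyond nr_hugepages), and it is about to read HugePages_Rsvd the same way. A quick consistency check along the same lines, reusing the get_meminfo sketch from earlier (a sketch of the idea, not the SPDK verifier's actual logic):

    total=$(get_meminfo HugePages_Total)
    free=$(get_meminfo HugePages_Free)
    rsvd=$(get_meminfo HugePages_Rsvd)
    surp=$(get_meminfo HugePages_Surp)
    # a freshly configured pool should be fully free with nothing surplus
    [ "$surp" -eq 0 ] && [ "$free" -eq "$total" ] &&
        echo "pool consistent: $free/$total free, $rsvd reserved"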
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44332600 kB' 'MemAvailable: 47840296 kB' 'Buffers: 2704 kB' 'Cached: 11733932 kB' 'SwapCached: 0 kB' 'Active: 8753768 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358272 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526620 kB' 'Mapped: 172052 kB' 'Shmem: 7834948 kB' 'KReclaimable: 199696 kB' 'Slab: 575260 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375564 kB' 'KernelStack: 12832 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.714 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.715 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.715 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.715 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.715 18:04:34 setup.sh.hugepages.default_setup -- 
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:08.717 nr_hugepages=1024
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:08.717 resv_hugepages=0
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:08.717 surplus_hugepages=0
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:08.717 anon_hugepages=0
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
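Reconstructed from the trace statements above (setup/common.sh@17-@33), get_meminfo boils down to a keyed lookup over /proc/meminfo or a node's own meminfo file; the assertions at hugepages.sh@107-@109 then check HugePages_Total (1024) against nr_hugepages + surplus + reserved. The following is a sketch inferred from the xtrace, not a verbatim copy of setup/common.sh, and details of the real script may differ:

  # sketch of get_meminfo as inferred from the xtrace;
  # extglob is needed for the +([0-9]) pattern used at common.sh@29
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # a node argument switches to that node's own meminfo when present (common.sh@23-@24)
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # per-node files prefix every line with "Node N "; strip it (common.sh@29)
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # print the value of the requested key and stop (common.sh@32-@33)
          [[ $var == "$get" ]] && echo "$val" && return 0
      done
      return 1
  }

Usage mirrors the trace: get_meminfo HugePages_Total prints 1024 here, and get_meminfo HugePages_Surp 0 reads node 0's counter (0 in this run).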
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
[get_meminfo preamble identical to the HugePages_Rsvd call above: common.sh@18-@31 leave node empty, read /proc/meminfo via mapfile and strip the Node prefix]
00:04:08.717 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot, near-identical to the first; only a few activity counters moved: Cached 11733956 kB, Active 8753740 kB, Active(anon) 8358244 kB, AnonPages 526584 kB, Shmem 7834972 kB, KernelStack 12816 kB, PageTables 7820 kB, Committed_AS 9444616 kB; all hugepage values unchanged: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB]
[xtrace elided: common.sh@32 scans every key against HugePages_Total, 'continue' on each non-match, until the match below]
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
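The get_nodes fragment above walks /sys/devices/system/node/node<N> and records a per-node hugepage count (1024 on node 0, 0 on node 1 in this run). A minimal standalone equivalent, keeping the nodes_sys name from the trace; reading the count from the sysfs hugepages file is an assumption about where those numbers come from, since the trace only shows the final assignments:

  # sketch of the per-node enumeration seen at hugepages.sh@27-@33
  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # assumed source: the node's 2048 kB hugepage pool size
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2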
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:08.719 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19934660 kB' 'MemUsed: 12942280 kB' 'SwapCached: 0 kB' 'Active: 6509836 kB' 'Inactive: 3357228 kB' 'Active(anon): 6237904 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739384 kB' 'Mapped: 98180 kB' 'AnonPages: 130892 kB' 'Shmem: 6110224 kB' 'KernelStack: 6776 kB' 'PageTables: 3148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 316092 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 214072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: common.sh@32 scans node 0's keys against HugePages_Surp, 'continue' on each non-match, until the match below]
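Note that the raw per-node file carries a "Node 0 " prefix on every line, which the mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 strips; a minimal illustration, using a value taken from the snapshot above:

  shopt -s extglob
  line='Node 0 MemTotal:       32876940 kB'
  echo "${line#Node +([0-9]) }"   # -> MemTotal:       32876940 kB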
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:08.720 node0=1024 expecting 1024
00:04:08.720 18:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:08.720
00:04:08.720 real 0m2.338s
00:04:08.720 user 0m0.611s
00:04:08.720 sys 0m0.847s
00:04:08.721 18:04:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:08.721 18:04:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:08.721 ************************************
00:04:08.721 END TEST default_setup
00:04:08.721 ************************************
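node0=1024 expecting 1024 is the per-node assertion that just passed. Outside the harness the same numbers can be read straight from sysfs; a sketch assuming the 2048 kB default page size seen throughout this run:

  # per-node 2 MB hugepage counters, the same values the test compares
  for n in /sys/devices/system/node/node[0-9]*; do
      echo "${n##*/}: $(< "$n/hugepages/hugepages-2048kB/nr_hugepages") x 2048kB"
  done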
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:08.979 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.980 18:04:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
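The get_test_nr_hugepages trace above converts the requested 1048576 kB (1 GiB) into 512 pages of the 2048 kB default size and books 512 pages against each of nodes 0 and 1, then hands NRHUGE/HUGENODE to the setup script whose device output follows. A rough sketch of that arithmetic, assuming a fixed 2048 kB page size (helper and variable names are illustrative, not the script's exact code):

    # Illustrative sketch of the per-node sizing logic traced above.
    get_test_nr_hugepages() {
        local size_kb=$1; shift          # e.g. 1048576 (1 GiB)
        local node_ids=("$@")            # e.g. (0 1)
        local hugepagesize_kb=2048       # default size reported in meminfo
        local nr_hugepages=$((size_kb / hugepagesize_kb))   # 1048576/2048 = 512
        declare -g -a nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages   # 512 pages booked per node
        done
        # Handed to the setup script exactly as the trace lines show:
        # NRHUGE=512 HUGENODE=0,1 .../spdk/scripts/setup.sh
    }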
00:04:09.917 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:09.917 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:09.917 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:09.917 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:09.917 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:09.917 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:09.917 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:09.917 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:09.917 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:09.917 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:09.917 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:09.917 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:09.917 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:09.917 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:09.917 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:09.917 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:09.917 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.182 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44327548 kB' 'MemAvailable: 47835244 kB' 'Buffers: 2704 kB' 'Cached: 11734028 kB' 'SwapCached: 0 kB' 'Active: 8753860 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358364 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526528 kB' 'Mapped: 172188 kB' 'Shmem: 7835044 kB' 'KReclaimable: 199696 kB' 'Slab: 575332 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375636 kB' 'KernelStack: 12800 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
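Every get_meminfo call in this trace is the same technique: slurp the meminfo file into an array, then scan it with IFS=': ' read -r var val _ until the key matches. The long escaped patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are just how xtrace prints the right-hand side of the [[ ... == ... ]] comparison, and each continue is a skipped non-matching field. A condensed, illustrative sketch of the same lookup for the plain /proc/meminfo case (the real helper also handles per-node files):

    get_meminfo() {
        # Print the value of one meminfo field, e.g.
        #   get_meminfo AnonHugePages   -> 0 on the machine in this log.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Keys in /proc/meminfo look like "AnonHugePages:       0 kB".
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # Mirrors the assignment in the trace above: anon=$(get_meminfo AnonHugePages)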
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.184 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44330880 kB' 'MemAvailable: 47838576 kB' 'Buffers: 2704 kB' 'Cached: 11734032 kB' 'SwapCached: 0 kB' 'Active: 8754148 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358652 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526868 kB' 'Mapped: 172140 kB' 'Shmem: 7835048 kB' 'KReclaimable: 199696 kB' 'Slab: 575316 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375620 kB' 'KernelStack: 12832 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
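With anon=0 and surp=0 in hand, verify_nr_hugepages needs one more global read (HugePages_Rsvd, below) plus the per-node counts. The local node= and mem=("${mem[@]#Node +([0-9]) }") lines show how the same parser copes with per-node files: /sys/devices/system/node/node<N>/meminfo prefixes every line with "Node <N>", which must be stripped before the key/value split. An illustrative per-node variant under that assumption (names are hypothetical, not the helper's exact code):

    get_node_meminfo() {
        # e.g. get_node_meminfo 0 HugePages_Total  -> 1024 on this machine.
        local node=$1 get=$2 var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        [[ -e $mem_f ]] || return 1
        # Per-node lines read "Node 0 HugePages_Total:  1024"; drop the
        # "Node <N> " prefix so the ': ' split sees the bare key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }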
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.186 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44331856 kB' 'MemAvailable: 47839552 kB' 'Buffers: 2704 kB' 'Cached: 11734048 kB' 'SwapCached: 0 kB' 'Active: 8754032 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526696 kB' 'Mapped: 172064 kB' 'Shmem: 7835064 kB' 'KReclaimable: 199696 kB' 'Slab: 575312 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375616 kB' 'KernelStack: 12832 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:10.187 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.188 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.189 18:04:36 
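(The loop condensed above is the inner field-scan of common.sh's get_meminfo helper: mapfile the meminfo file into an array, strip the "Node N " prefix that per-node sysfs files carry, then IFS=': ' read each line into name/value and skip until the requested field matches. A minimal standalone sketch of that pattern follows; it assumes bash with extglob, and names such as get_meminfo_sketch are illustrative, not SPDK's verbatim setup/common.sh source.)

  #!/usr/bin/env bash
  shopt -s extglob
  # Fetch one meminfo field, optionally scoped to a NUMA node.
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _ line mem
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs; fall back to the global file.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # Matching field found: print its value and stop scanning.
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      return 1
  }
  get_meminfo_sketch HugePages_Rsvd   # prints 0 on the box traced here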
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:10.189 nr_hugepages=1024
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.189 resv_hugepages=0
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.189 surplus_hugepages=0
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.189 anon_hugepages=0
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.189 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44332264 kB' 'MemAvailable: 47839960 kB' 'Buffers: 2704 kB' 'Cached: 11734068 kB' 'SwapCached: 0 kB' 'Active: 8753844 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358348 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526468 kB' 'Mapped: 172064 kB' 'Shmem: 7835084 kB' 'KReclaimable: 199696 kB' 'Slab: 575312 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375616 kB' 'KernelStack: 12816 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9444732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:10.190 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (trace condensed: the read loop again steps through every field from MemTotal through Unaccepted, executing 'continue' on each, until it reaches HugePages_Total)
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
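(The checks at hugepages.sh@107-110 above are plain arithmetic assertions: the kernel-reported hugepage pool must account for every page the test requested, i.e. HugePages_Total == nr_hugepages + surplus + reserved, which in this run is 1024 == 1024 + 0 + 0. A hedged paraphrase of that verification, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch rather than the harness's own variables:)

  nr_hugepages=1024                                # pages the test requested
  resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)        # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)      # 1024 in this run
  # Every requested page must be visible in the kernel's pool:
  (( total == nr_hugepages + surp + resv )) || exit 1
  (( total == nr_hugepages )) || exit 1            # no surplus/reserve drift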
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.191 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20979240 kB' 'MemUsed: 11897700 kB' 'SwapCached: 0 kB' 'Active: 6509904 kB' 'Inactive: 3357228 kB' 'Active(anon): 6237972 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739392 kB' 'Mapped: 98192 kB' 'AnonPages: 130860 kB' 'Shmem: 6110232 kB' 'KernelStack: 6760 kB' 'PageTables: 3140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 316196 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 214176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:10.192 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (trace condensed: the read loop steps through node0's fields from MemTotal through HugePages_Free, executing 'continue' on each, until it reaches HugePages_Surp)
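(At hugepages.sh@29-33 above, the harness discovers the NUMA nodes via the extglob pattern /sys/devices/system/node/node+([0-9]) and records an expected 512 pages for each of the two nodes; at @115-117 it then loops over the nodes, folding each node's reserved and surplus counts into the expected value. A rough reconstruction of that per-node pass, again assuming the get_meminfo_sketch helper from earlier; the loop body is illustrative, not the verbatim hugepages.sh source:)

  shopt -s extglob nullglob
  nodes_test=([0]=512 [1]=512)     # expected split of the 1024-page pool
  no_nodes=0
  for node in /sys/devices/system/node/node+([0-9]); do
      (( ++no_nodes ))             # count populated NUMA nodes
  done
  (( no_nodes > 0 )) || exit 1     # 2 on the machine traced here
  resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))   # fold in reserved pages (0 here)
      surp=$(get_meminfo_sketch HugePages_Surp "$node")
      (( nodes_test[node] += surp ))   # fold in surplus pages (0 here)
  done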
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.193 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23352140 kB' 'MemUsed: 4312632 kB' 'SwapCached: 0 kB' 'Active: 2244120 kB' 'Inactive: 148964 kB' 'Active(anon): 2120556 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1997424 kB' 'Mapped: 73872 kB' 'AnonPages: 395796 kB' 'Shmem: 1724896 kB' 'KernelStack: 6056 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97676 kB' 'Slab: 259116 kB' 'SReclaimable: 97676 kB' 'SUnreclaim: 161440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (trace condensed: the read loop steps through node1's fields, executing 'continue' on each non-matching field)
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.194 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.195 node0=512 expecting 512 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:10.195 node1=512 expecting 512 00:04:10.195 18:04:36 
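The block above is setup/common.sh's get_meminfo doing a linear scan of a meminfo file, once per node, here against /sys/devices/system/node/node1/meminfo. A minimal bash sketch of that lookup, reconstructed from the traced statements (the function and variable names come from the trace itself; treat this as an illustration, not the verbatim SPDK source):

    shopt -s extglob   # the +([0-9]) prefix-strip below is an extended glob

    # Pick /proc/meminfo or the per-node file, strip the "Node <N> " prefix
    # that per-node files carry, then scan line by line for the wanted key.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the 'continue' lines in the xtrace
            echo "$val"                        # e.g. 0 for HugePages_Surp
            return 0
        done
        return 1
    }

Against the node1 dump above, get_meminfo HugePages_Surp 1 prints 0, which hugepages.sh@116-117 folds into nodes_test[1] before the node0=512/node1=512 checks.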
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:10.195 00:04:10.195 real 0m1.402s 00:04:10.195 user 0m0.562s 00:04:10.195 sys 0m0.798s 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.195 18:04:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.195 ************************************ 00:04:10.195 END TEST per_node_1G_alloc 00:04:10.195 ************************************ 00:04:10.195 18:04:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:10.195 18:04:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.195 18:04:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.195 18:04:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.195 ************************************ 00:04:10.195 START TEST even_2G_alloc 00:04:10.195 ************************************ 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.195 
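The get_test_nr_hugepages trace above fixes the layout the new test will verify. The division itself is not printed (hugepages.sh@55-57 only shows nr_hugepages=1024), but the traced numbers are consistent with this sketch of the arithmetic:

    # 2 GiB requested, default 2048 kB hugepages, split evenly across the
    # two NUMA nodes -- matching nr_hugepages=1024 and nodes_test[0..1]=512.
    size=2097152             # kB, the argument passed to get_test_nr_hugepages
    default_hugepages=2048   # kB, per 'Hugepagesize: 2048 kB' in the meminfo dumps
    nr_hugepages=$(( size / default_hugepages ))       # 1024
    _no_nodes=2
    echo "per node: $(( nr_hugepages / _no_nodes ))"   # 512

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes set just below, setup.sh is expected to realize exactly that split, which verify_nr_hugepages then reads back out of meminfo.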
18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.195 18:04:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:11.577 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:11.577 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:11.577 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:11.578 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:11.578 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:11.578 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:11.578 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:11.578 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:11.578 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:11.578 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:11.578 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:11.578 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:11.578 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:11.578 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:11.578 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:11.578 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:11.578 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.578 
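Earlier in this block, verify_nr_hugepages decides whether anonymous hugepages can exist at all: the hugepages.sh@96 test compares 'always [madvise] never' against *\[\n\e\v\e\r\]*, i.e. it only bothers reading AnonHugePages when transparent hugepages are not pinned to [never]. A hedged sketch of that gate (the sysfs path is an assumption inferred from the operand's shape; it is not shown in the trace):

    # If THP is anything but "[never]", anonymous hugepages may be in use,
    # so AnonHugePages from meminfo is worth counting; otherwise anon stays 0.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump that follows, hence anon=0
    fi

Here the dump that follows reports AnonHugePages: 0 kB, so the trace lands on anon=0 either way.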
18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44328924 kB' 'MemAvailable: 47836620 kB' 'Buffers: 2704 kB' 'Cached: 11734160 kB' 'SwapCached: 0 kB' 'Active: 8754508 kB' 'Inactive: 3506192 kB' 'Active(anon): 8359012 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526952 kB' 'Mapped: 172096 kB' 'Shmem: 7835176 kB' 'KReclaimable: 199696 kB' 'Slab: 575092 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375396 kB' 'KernelStack: 12784 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.578 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.579 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44328252 kB' 'MemAvailable: 47835948 kB' 'Buffers: 2704 kB' 'Cached: 11734164 kB' 'SwapCached: 0 kB' 'Active: 8754356 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358860 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526796 kB' 'Mapped: 172076 kB' 'Shmem: 7835180 kB' 'KReclaimable: 199696 kB' 'Slab: 575096 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375400 kB' 'KernelStack: 12816 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.580 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.581 18:04:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.581 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue trace repeats for each remaining /proc/meminfo field (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) until the requested field is reached ...]
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
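The trace above is bash xtrace output from the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time. Below is a minimal sketch of that pattern, reconstructed from the commands visible in the trace (simplified, not the verbatim SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob                            # for the +([0-9]) pattern below

    # get_meminfo <Field> [node]: print the numeric value of one meminfo field,
    # reading the per-node sysfs copy when a NUMA node number is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do    # field name, value, unit
            [[ $var == "$get" ]] || continue    # the escaped-glob compares seen in the trace
            echo "$val"                         # value only; the unit lands in $_
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With that shape, the step recorded next reduces to surp=$(get_meminfo HugePages_Surp), which is the surp=0 assignment the following trace line shows.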
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.582 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.583 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44328252 kB' 'MemAvailable: 47835948 kB' 'Buffers: 2704 kB' 'Cached: 11734180 kB' 'SwapCached: 0 kB' 'Active: 8754300 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358804 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526804 kB' 'Mapped: 172076 kB' 'Shmem: 7835196 kB' 'KReclaimable: 199696 kB' 'Slab: 575164 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375468 kB' 'KernelStack: 12832 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
[... the field-by-field compare/continue trace (MemTotal, MemFree, MemAvailable, ..., HugePages_Total, HugePages_Free) repeats until HugePages_Rsvd matches ...]
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.585 nr_hugepages=1024
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.585 resv_hugepages=0
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.585 surplus_hugepages=0
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.585 anon_hugepages=0
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
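Once surp, resv, and the requested page count are known, hugepages.sh@107-110 are plain arithmetic guards. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above (variable names follow the trace; exiting on failure is an assumption, the real script's failure handling may differ):

    nr_hugepages=1024                       # pages requested by this test run
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

    # The kernel's HugePages_Total must account for the requested pages plus
    # any surplus and reserved pages before the per-node split is verified.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1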
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.585 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44328252 kB' 'MemAvailable: 47835948 kB' 'Buffers: 2704 kB' 'Cached: 11734204 kB' 'SwapCached: 0 kB' 'Active: 8754380 kB' 'Inactive: 3506192 kB' 'Active(anon): 8358884 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526808 kB' 'Mapped: 172076 kB' 'Shmem: 7835220 kB' 'KReclaimable: 199696 kB' 'Slab: 575164 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375468 kB' 'KernelStack: 12832 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
[... the field-by-field compare/continue trace repeats until HugePages_Total matches ...]
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
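The get_nodes walk just traced enumerates NUMA nodes from sysfs and seeds the expected per-node page count. A minimal sketch of that enumeration (modeled on the trace; the 512-page seed is simply this run's 1024 pages split evenly across the two nodes):

    shopt -s extglob                        # for the node+([0-9]) glob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512       # key by numeric suffix: node0 -> 0, node1 -> 1
    done
    no_nodes=${#nodes_sys[@]}               # 2 on this machine
    (( no_nodes > 0 ))                      # sanity check, as at hugepages.sh@33

Each node's HugePages_Surp is then read from /sys/devices/system/node/node<N>/meminfo, which is why that path replaces /proc/meminfo in the trace that follows.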
+([0-9]) }") 00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20979556 kB' 'MemUsed: 11897384 kB' 'SwapCached: 0 kB' 'Active: 6510168 kB' 'Inactive: 3357228 kB' 'Active(anon): 6238236 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739396 kB' 'Mapped: 98204 kB' 'AnonPages: 131072 kB' 'Shmem: 6110236 kB' 'KernelStack: 6760 kB' 'PageTables: 3104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 315936 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 213916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.587 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.588 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
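The records above are bash xtrace from the get_meminfo helper in setup/common.sh: it picks /proc/meminfo or, when a node is given, the per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix, and echoes the value of the requested field. A minimal sketch of that pattern, paraphrased from the trace (not the verbatim SPDK source):

shopt -s extglob                       # the +([0-9]) pattern below needs extglob
get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs; prefer them when a node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split e.g. "HugePages_Surp: 0" into key and value.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
# Usage as in the trace: get_meminfo HugePages_Surp 1  ->  0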
00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23348444 kB' 'MemUsed: 4316328 kB' 'SwapCached: 0 kB' 'Active: 2244240 kB' 'Inactive: 148964 kB' 'Active(anon): 2120676 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1997552 kB' 'Mapped: 73872 kB' 'AnonPages: 395732 kB' 'Shmem: 1725024 kB' 'KernelStack: 6072 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97676 kB' 'Slab: 259228 kB' 'SReclaimable: 97676 kB' 'SUnreclaim: 161552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.589 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.590 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.591 node0=512 expecting 512 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:11.591 node1=512 expecting 512 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.591 00:04:11.591 real 0m1.397s 00:04:11.591 user 0m0.617s 00:04:11.591 sys 0m0.740s 00:04:11.591 18:04:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.591 18:04:37 
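The bookkeeping just traced, which prints the node0=512 expecting 512 and node1=512 expecting 512 lines, folds reserved (resv) and surplus pages into each node's expected count, then collects the distinct expected and observed counts into the sorted_t and sorted_s sets for the final comparison. A rough sketch of that flow, paraphrased from the setup/hugepages.sh@115-130 records above (nodes_sys stands for the per-node counts read back from sysfs; get_meminfo is the helper sketched earlier; not the verbatim script):

nodes_test=(512 512)   # pages the test expects on node0/node1
nodes_sys=(512 512)    # pages actually reported per node
resv=0                 # reserved pages, computed earlier in the test
declare -A sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
    # Fold reserved and surplus pages into the expectation, mirroring
    # `(( nodes_test[node] += resv ))` / `+= surp` in the trace.
    surp=$(get_meminfo HugePages_Surp "$node")
    (( nodes_test[node] += resv + surp ))
done
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1   # set of distinct expected counts
    sorted_s[${nodes_sys[node]}]=1    # set of distinct observed counts
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# The test passes when the two sets line up ([[ 512 == 512 ]] in the trace).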
00:04:11.591 ************************************ 00:04:11.591 END TEST even_2G_alloc 00:04:11.591 ************************************ 00:04:11.850 18:04:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:11.850 18:04:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.850 18:04:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.850 18:04:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.850 ************************************ 00:04:11.850 START TEST odd_alloc 00:04:11.850 ************************************ 00:04:11.850 18:04:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 18:04:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 18:04:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
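The odd_alloc prologue above is plain arithmetic: HUGEMEM=2049 MiB is 2098176 kB, which at the default 2048 kB hugepage size yields the nr_hugepages=1025 seen in the trace, and 1025 pages over the host's 2 NUMA nodes leaves 513 on node 0 and 512 on node 1, matching the two nodes_test assignments. A sketch of that computation (the round-up is an assumption; the trace only shows the results):

# Sizing sketch for the trace above; rounding behavior is assumed.
size_kb=$(( 2049 * 1024 ))      # HUGEMEM=2049 MiB -> 2098176 kB
page_kb=2048                    # default 2 MiB hugepage
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # -> 1025
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))                 # -> 512
extra=$(( nr_hugepages % no_nodes ))                    # -> 1 page left over
echo "node0=$(( per_node + extra )) node1=$per_node"    # node0=513 node1=512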
18:04:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.790 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.791 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.791 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.791 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.791 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.791 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.791 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.791 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.791 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.791 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.791 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.791 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.791 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.791 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.791 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.791 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.791 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:13.055 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
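The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record above is the transparent-hugepage guard in verify_nr_hugepages: the bracketed entry in the kernel's THP policy string is the active mode, and only when that mode is not "never" does the test sample AnonHugePages (it comes back 0 kB in the records that follow). A sketch of the guard, paraphrased from the trace (the sysfs path is the standard kernel location; get_meminfo as sketched earlier):

# Paraphrased THP guard; on this host the policy string is "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages behind the test's back, so they
    # are measured and discounted from the hugetlb expectations.
    anon=$(get_meminfo AnonHugePages)
fi
echo "AnonHugePages to discount: ${anon:-0} kB"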
18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44318896 kB' 'MemAvailable: 47826592 kB' 'Buffers: 2704 kB' 'Cached: 11734300 kB' 'SwapCached: 0 kB' 'Active: 8752072 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356576 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524532 kB' 'Mapped: 171340 kB' 'Shmem: 7835316 kB' 'KReclaimable: 199696 kB' 'Slab: 575136 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375440 kB' 'KernelStack: 12832 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9431524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
[xtrace condensed: setup/common.sh@31-32 walks the full /proc/meminfo snapshot above field by field, skipping each key with `continue` until AnonHugePages matches]
18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.057 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44317640 kB' 'MemAvailable: 47825336 kB' 'Buffers: 2704 kB' 'Cached: 11734304 kB' 'SwapCached: 0 kB' 'Active: 8751524 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356028 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523948 kB' 'Mapped: 171320 kB' 'Shmem: 7835320 kB' 'KReclaimable: 199696 kB' 'Slab: 575136 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375440 kB' 'KernelStack: 12880 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9432300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:13.058 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (every field from MemTotal through HugePages_Rsvd fails the match against HugePages_Surp and is skipped with continue)
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.059 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18-31 -- # (same setup as above: node unset, mem_f=/proc/meminfo, mapfile -t mem, strip the "Node N " prefix, begin the scan)
00:04:13.060 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44319248 kB' 'MemAvailable: 47826944 kB' 'Buffers: 2704 kB' 'Cached: 11734316 kB' 'SwapCached: 0 kB' 'Active: 8752428 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356932 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524800 kB' 'Mapped: 171244 kB' 'Shmem: 7835332 kB' 'KReclaimable: 199696 kB' 'Slab: 575124 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375428 kB' 'KernelStack: 13120 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9432556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196368 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
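The mem=("${mem[@]#Node +([0-9]) }") expansion in the setup lines above is what lets the same scan work on both /proc/meminfo and the per-node files. In isolation, with sample lines copied from the node-0 dump at the end of this section:

    #!/usr/bin/env bash
    shopt -s extglob   # enables the +([0-9]) pattern below

    # Per-node meminfo lines carry a "Node N " prefix that /proc/meminfo
    # lines do not; the expansion strips it so the keys line up either way.
    mem=('Node 0 MemTotal: 32876940 kB' 'Node 0 MemFree: 20961592 kB')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal: 32876940 kB
    # MemFree: 20961592 kB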
00:04:13.060 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (every field from MemTotal through HugePages_Free fails the match against HugePages_Rsvd and is skipped with continue)
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:13.062 nr_hugepages=1025
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.062 resv_hugepages=0
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.062 surplus_hugepages=0
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.062 anon_hugepages=0
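The lines that follow assert the accounting identity on these values (setup/hugepages.sh @107 and @110): the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages. A minimal sketch of that check with the values just echoed; the variable name hugetlb_total is mine, standing in for the HugePages_Total value fetched via get_meminfo below:

    #!/usr/bin/env bash
    # Values computed above in the trace.
    nr_hugepages=1025
    surp=0               # HugePages_Surp
    resv=0               # HugePages_Rsvd
    hugetlb_total=1025   # HugePages_Total from /proc/meminfo

    if (( hugetlb_total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$hugetlb_total expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    fi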
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18-31 -- # (same setup as above: node unset, mem_f=/proc/meminfo, mapfile -t mem, strip the "Node N " prefix, begin the scan)
00:04:13.062 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44319332 kB' 'MemAvailable: 47827028 kB' 'Buffers: 2704 kB' 'Cached: 11734316 kB' 'SwapCached: 0 kB' 'Active: 8752572 kB' 'Inactive: 3506192 kB' 'Active(anon): 8357076 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524884 kB' 'Mapped: 171244 kB' 'Shmem: 7835332 kB' 'KReclaimable: 199696 kB' 'Slab: 575124 kB' 'SReclaimable: 199696 kB' 'SUnreclaim: 375428 kB' 'KernelStack: 13136 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9432576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:13.063 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (every field from MemTotal through Unaccepted fails the match against HugePages_Total and is skipped with continue)
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20961592 kB' 'MemUsed: 11915348 kB' 'SwapCached: 0 kB' 'Active: 6511180 kB' 'Inactive: 3357228 kB'
'Active(anon): 6239248 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739412 kB' 'Mapped: 98052 kB' 'AnonPages: 132080 kB' 'Shmem: 6110252 kB' 'KernelStack: 7192 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 315948 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 213928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.064 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23348300 kB' 'MemUsed: 4316472 kB' 'SwapCached: 0 kB' 'Active: 2247448 kB' 'Inactive: 148964 kB' 'Active(anon): 2123884 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1997660 kB' 'Mapped: 73628 kB' 'AnonPages: 398900 kB' 'Shmem: 1725132 kB' 'KernelStack: 6040 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97676 kB' 'Slab: 259168 kB' 'SReclaimable: 97676 kB' 'SUnreclaim: 161492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:13.065 
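The records above show setup/common.sh's get_meminfo walking a meminfo dump one 'Key: value' pair at a time until it reaches the requested key, then echoing the value and returning. A minimal stand-alone sketch of that lookup technique, assuming bash; meminfo_lookup is a hypothetical helper name, not part of the SPDK scripts:

  #!/usr/bin/env bash
  # Look up one key in /proc/meminfo, or in a per-node meminfo file when a
  # node number is given, the way the trace above walks key/value pairs.
  meminfo_lookup() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node "$node" }   # per-node lines carry a "Node <N> " prefix
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"              # value column only, e.g. "0" for HugePages_Surp
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  meminfo_lookup HugePages_Surp 0   # prints 0 for node0 in the dump above

The echo/return pair at common.sh@33 in the trace is the same pattern: the first matching key short-circuits the scan, which is why every earlier key shows up only as a skipped iteration.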
00:04:13.065 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace loop collapsed: every non-matching node1 meminfo key (MemTotal through HugePages_Free) is skipped with continue]
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:13.067 node0=512 expecting 513
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:13.067 node1=513 expecting 512
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:13.067
00:04:13.067 real 0m1.415s
00:04:13.067 user 0m0.604s
00:04:13.067 sys 0m0.759s
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:13.067 18:04:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:13.067 ************************************
00:04:13.067 END TEST odd_alloc
00:04:13.067 ************************************
00:04:13.067 18:04:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:13.067 18:04:39 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:13.067 18:04:39 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:13.067 18:04:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.327 ************************************
00:04:13.327 START TEST custom_alloc
00:04:13.327 ************************************
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
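odd_alloc, which just finished above, spreads an odd hugepage count (1025) over the two NUMA nodes and tolerates the extra page landing on either node: the trace saw 512 on node0 and 513 on node1 while the script's own split predicted the mirror image, which is why the @127/@130 records compare sorted value sets ("512 513" against "512 513") instead of per-node values. A sketch of the split arithmetic under those assumptions; split_hugepages is a hypothetical helper, not the SPDK function:

  # Distribute a page count across NUMA nodes: every node gets the integer
  # share, and the remainder lands one extra page per node from node0 up.
  split_hugepages() {
      local total=$1 nodes=$2 base rem i
      base=$(( total / nodes ))   # 1025 / 2 = 512
      rem=$(( total % nodes ))    # 1025 % 2 = 1
      for (( i = 0; i < nodes; i++ )); do
          echo "node$i=$(( base + (i < rem ? 1 : 0) ))"
      done
  }

  split_hugepages 1025 2   # node0=513, node1=512; the kernel may mirror this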
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:13.327 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.328 18:04:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:14.269 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:14.269 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:14.269 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:14.269 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:14.269 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:14.269 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:14.269 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:14.269 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:14.269 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:14.269 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:14.269 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:14.269 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:14.269 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:14.269 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:14.269 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:14.269 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:14.269 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.535 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.536 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.536 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
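Before the meminfo dump that follows, it is worth spelling out where nr_hugepages=1536 comes from: custom_alloc asked for 1048576 kB on node0 and 2097152 kB on node1, and with the 2048 kB default hugepage size that is 512 and 1024 pages, joined into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' for setup.sh. A stand-alone sketch of that arithmetic and string construction in bash; the variable names mirror the trace, but the snippet is an illustration rather than the SPDK code (which uses `local IFS=,` and "${HUGENODE[*]}" for the same join):

  default_hugepages=2048              # kB, the Hugepagesize in the dump below
  sizes=(1048576 2097152)             # kB requested per node, as in the trace
  nodes_hp=()
  for i in "${!sizes[@]}"; do
      nodes_hp[i]=$(( sizes[i] / default_hugepages ))   # 512, then 1024
  done
  hugenode=()
  for node in "${!nodes_hp[@]}"; do
      hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
  done
  printf -v HUGENODE '%s,' "${hugenode[@]}"   # join with commas ...
  HUGENODE=${HUGENODE%,}                      # ... and drop the trailing one
  echo "$HUGENODE"   # nodes_hp[0]=512,nodes_hp[1]=1024 -> 1536 pages total

The dump below confirms the result: HugePages_Total: 1536 with Hugepagesize: 2048 kB, i.e. 3145728 kB of Hugetlb memory.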
00:04:14.536 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43273492 kB' 'MemAvailable: 46781200 kB' 'Buffers: 2704 kB' 'Cached: 11734428 kB' 'SwapCached: 0 kB' 'Active: 8751212 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355716 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523520 kB' 'Mapped: 171408 kB' 'Shmem: 7835444 kB' 'KReclaimable: 199720 kB' 'Slab: 575088 kB' 'SReclaimable: 199720 kB' 'SUnreclaim: 375368 kB' 'KernelStack: 12832 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9430416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:14.536 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace loop collapsed: the scan for AnonHugePages skips each non-matching key (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, ...) with continue]
00:04:14.536 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.536 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 
18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
node= 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43273924 kB' 'MemAvailable: 46781632 kB' 'Buffers: 2704 kB' 'Cached: 11734432 kB' 'SwapCached: 0 kB' 'Active: 8751032 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523332 kB' 'Mapped: 171348 kB' 'Shmem: 7835448 kB' 'KReclaimable: 199720 kB' 'Slab: 575092 kB' 'SReclaimable: 199720 kB' 'SUnreclaim: 375372 kB' 'KernelStack: 12800 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9430436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.537 18:04:40 
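The trace above is get_meminfo from setup/common.sh scanning /proc/meminfo one "key: value" line at a time and printing the value once the requested field matches (here AnonHugePages, which reads 0). A minimal standalone sketch of that lookup pattern, reconstructed from the trace alone (an approximation, not the verbatim SPDK helper):

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: read /proc/meminfo (or a per-NUMA-node
    # meminfo file when a node is given), strip any "Node N " prefix, then scan
    # "key: value" pairs until the requested field matches.
    shopt -s extglob # for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1 node=$2 # node is optional; empty means system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <n> "; drop the prefix
        # so both layouts parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # skip non-matching fields
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages # prints 0 on the machine traced above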
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43273924 kB' 'MemAvailable: 46781632 kB' 'Buffers: 2704 kB' 'Cached: 11734432 kB' 'SwapCached: 0 kB' 'Active: 8751032 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523332 kB' 'Mapped: 171348 kB' 'Shmem: 7835448 kB' 'KReclaimable: 199720 kB' 'Slab: 575092 kB' 'SReclaimable: 199720 kB' 'SUnreclaim: 375372 kB' 'KernelStack: 12800 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9430436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.537 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical "IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" trace entries for MemFree through HugePages_Rsvd condensed]
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
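Both HugePages_Surp and HugePages_Rsvd come back 0 here: no surplus pages (pages the kernel allocated beyond nr_hugepages via overcommit) and no reserved pages (pages committed to a mapping but not yet faulted in) are outstanding. For a one-off check the same counters can be read directly from the standard kernel interfaces, without the field-by-field scan; the hugepages-2048kB directory matches the Hugepagesize reported in the snapshots above (stock kernel paths, not SPDK-specific):

    grep -E 'HugePages_(Surp|Rsvd):' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages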
00:04:14.539 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43273672 kB' 'MemAvailable: 46781380 kB' 'Buffers: 2704 kB' 'Cached: 11734444 kB' 'SwapCached: 0 kB' 'Active: 8751264 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355768 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523520 kB' 'Mapped: 171272 kB' 'Shmem: 7835460 kB' 'KReclaimable: 199720 kB' 'Slab: 575084 kB' 'SReclaimable: 199720 kB' 'SUnreclaim: 375364 kB' 'KernelStack: 12832 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9430456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:14.540 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.540 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical "IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" trace entries for MemFree through HugePages_Free condensed]
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:14.542 nr_hugepages=1536
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:14.542 resv_hugepages=0
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:14.542 surplus_hugepages=0
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:14.542 anon_hugepages=0
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
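With anon, surp, and resv collected, hugepages.sh verifies the pool before proceeding: the 1536 pages this custom_alloc test configured must equal the kernel's count, with no surplus or reserved pages outstanding, and only then is HugePages_Total re-read. The same check as a self-contained sketch (variable names are taken from the trace; the surrounding control flow is an assumption):

    # Consistency check corresponding to setup/hugepages.sh@107-@109 (sketch):
    nr_hugepages=1536 # echoed by the script just above
    surp=0            # get_meminfo HugePages_Surp
    resv=0            # get_meminfo HugePages_Rsvd
    (( 1536 == nr_hugepages + surp + resv )) || exit 1 # pool accounting adds up
    (( 1536 == nr_hugepages )) || exit 1               # none surplus or reserved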
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.542 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43273420 kB' 'MemAvailable: 46781128 kB' 'Buffers: 2704 kB' 'Cached: 11734468 kB' 'SwapCached: 0 kB' 'Active: 8751000 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355504 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523208 kB' 'Mapped: 171272 kB' 'Shmem: 7835484 kB' 'KReclaimable: 199720 kB' 'Slab: 575076 kB' 'SReclaimable: 199720 kB' 'SUnreclaim: 375356 kB' 'KernelStack: 12816 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9430476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
[... per-key scan condensed: every snapshot field from MemTotal through HugePages_Free is compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue ...]
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
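The condensed [[ key == pattern ]] / continue runs above all come from the same lookup loop: common.sh snapshots the whole meminfo file into an array with mapfile, strips any leading "Node N " prefix, then read-splits each entry on ': ' until the requested key matches and its value is echoed. A standalone sketch of that parsing idea, assuming only the standard /proc and sysfs paths (plain bash, illustrative; not the actual setup/common.sh source):

#!/usr/bin/env bash
# Sketch: look up one field from /proc/meminfo or a per-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # Per-node files live in sysfs and prefix every line with "Node N ".
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node "$node" }           # drop the per-node prefix if present
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1536 on this runner
# e.g. get_meminfo_sketch HugePages_Surp 0   -> node0's surplus count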
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.544 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20966972 kB' 'MemUsed: 11909968 kB' 'SwapCached: 0 kB' 'Active: 6510324 kB' 'Inactive: 3357228 kB' 'Active(anon): 6238392 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739476 kB' 'Mapped: 97632 kB' 'AnonPages: 131264 kB' 'Shmem: 6110316 kB' 'KernelStack: 6824 kB' 'PageTables: 3092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 315868 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 213848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan condensed: every node0 field from MemTotal through HugePages_Free is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue ...]
00:04:14.545 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.545 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.545 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:14.545 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.545 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.545 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.546 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22306392 kB' 'MemUsed: 5358380 kB' 'SwapCached: 0 kB' 'Active: 2240804 kB' 'Inactive: 148964 kB' 'Active(anon): 2117240 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1997700 kB' 'Mapped: 73640 kB' 'AnonPages: 392068 kB' 'Shmem: 1725172 kB' 'KernelStack: 5992 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97700 kB' 'Slab: 259208 kB' 'SReclaimable: 97700 kB' 'SUnreclaim: 161508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... per-key scan condensed: every node1 field from MemTotal through HugePages_Free is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue ...]
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
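Both per-node lookups above rely on the kernel exposing the same counters split by NUMA node under /sys/devices/system/node/nodeN/meminfo, where every line carries a "Node N" prefix. A short sketch of the enumeration the get_nodes loop performs (plain bash, illustrative only, not the SPDK helper):

#!/usr/bin/env bash
# Sketch: collect per-NUMA-node hugepage counts from sysfs.
declare -A nodes
for d in /sys/devices/system/node/node[0-9]*; do
    n=${d##*node}                                        # node index from the path
    # Per-node lines look like "Node 0 HugePages_Total:   512" -> field 4.
    nodes[$n]=$(awk '/HugePages_Total:/ {print $4}' "$d/meminfo")
done
for n in "${!nodes[@]}"; do echo "node$n=${nodes[$n]}"; done

On this runner it would print node0=512 and node1=1024, matching the expectations echoed just below.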
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:14.547 node0=512 expecting 512
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:14.547 node1=1024 expecting 1024
00:04:14.547 18:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
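The sorted_t/sorted_s assignments above use a compact bash idiom: writing into an indexed array at index nodes_test[node] turns the measured values into array indices, which bash always enumerates in ascending order, so joining the indices yields a canonical order-independent summary. A hedged reconstruction of that idiom (the comma-join step is inferred from the `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]` comparison in the trace, not taken from the hugepages.sh source):

#!/usr/bin/env bash
# Sketch: the "array index as sorted set" idiom behind sorted_t/sorted_s.
declare -a sorted_t sorted_s
nodes_test=([0]=512 [1]=1024)      # what the test measured per node
nodes_sys=([0]=512 [1]=1024)       # what the kernel reports per node
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # value becomes an index; indices come back sorted
    sorted_s[nodes_sys[node]]=1
done
t=$(IFS=,; echo "${!sorted_t[*]}")  # -> 512,1024
s=$(IFS=,; echo "${!sorted_s[*]}")  # -> 512,1024
[[ $t == "$s" ]] && echo "match: $t"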
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.806 18:04:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.745 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:15.745 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.745 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:15.745 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:15.745 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:15.746 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:15.746 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:15.746 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:15.746 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:15.746 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:15.746 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:15.746 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:15.746 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:15.746 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:15.746 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:15.746 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:15.746 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44315408 kB' 'MemAvailable: 47823112 kB' 'Buffers: 2704 kB' 'Cached: 11734552 kB' 'SwapCached: 0 kB' 'Active: 8751896 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356400 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524024 kB' 'Mapped: 171348 kB' 'Shmem: 7835568 kB' 'KReclaimable: 199712 kB' 'Slab: 574836 kB' 'SReclaimable: 199712 kB' 'SUnreclaim: 375124 kB' 'KernelStack: 12832 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9430748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 
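Annotation: the long printf above is the whole /proc/meminfo snapshot being replayed into a field scanner, and the @31/@32 lines that follow are that scanner rejecting one field per iteration until it reaches AnonHugePages. A reconstruction of get_meminfo from this xtrace; the per-node fallback wiring and the final return are not fully visible in the log, so treat those details as assumptions:

    #!/usr/bin/env bash
    # get_meminfo, reconstructed from the setup/common.sh xtrace above.
    shopt -s extglob                               # needed for the +([0-9]) pattern
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        # The trace runs two separate tests here (@23 and @25); they are folded
        # into one gate in this sketch.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # strip "Node N " on per-node rows
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue       # logged as the escaped pattern
            echo "$val"                            # kB value, or a bare count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                   # assumption: never hit in this trace
    }
    get_meminfo AnonHugePages                      # prints 0 on this box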
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.010 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 
18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 
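Annotation: the right-hand side of every @32 comparison is rendered as \A\n\o\n\H\u\g\e\P\a\g\e\s because xtrace backslash-escapes a quoted [[ ]] operand to show it is matched literally rather than as a glob; the test itself is an ordinary string comparison. A minimal reproduction:

    # Repro of the escaped right-hand side seen throughout this trace.
    get=AnonHugePages var=MemTotal
    set -x
    [[ $var == "$get" ]]    # xtrace prints: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    set +x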
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.011 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44315616 kB' 'MemAvailable: 47823320 kB' 'Buffers: 2704 kB' 'Cached: 11734552 kB' 'SwapCached: 0 kB' 'Active: 8751528 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356032 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523672 kB' 'Mapped: 171344 kB' 'Shmem: 7835568 kB' 'KReclaimable: 199712 kB' 'Slab: 574836 kB' 'SReclaimable: 199712 kB' 'SUnreclaim: 375124 kB' 'KernelStack: 12816 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9430764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.012 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 
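Annotation: every get_meminfo call replays this field-by-field scan under xtrace, so a single lookup of a field near the bottom of /proc/meminfo (HugePages_Surp here) produces dozens of log lines. For a quick manual cross-check of the same counters, one grep over the file suffices:

    # One-pass view of every counter this test keeps re-reading.
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb)' /proc/meminfo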
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 
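Annotation: unlike the kB fields around them, the HugePages_* rows in these snapshots are bare page counts; the Hugetlb row is their product with Hugepagesize. The snapshot above checks out:

    # 1024 pages x 2048 kB/page = 2097152 kB, matching the Hugetlb row.
    pages=1024 page_kb=2048
    echo $((pages * page_kb))    # 2097152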
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.013 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44315904 kB' 'MemAvailable: 47823608 kB' 'Buffers: 2704 kB' 'Cached: 11734572 kB' 'SwapCached: 0 kB' 'Active: 8751440 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355944 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523556 kB' 'Mapped: 171268 kB' 'Shmem: 7835588 kB' 'KReclaimable: 199712 kB' 'Slab: 574836 kB' 'SReclaimable: 199712 kB' 'SUnreclaim: 375124 kB' 
'KernelStack: 12848 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9430788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 
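Annotation: at this point verify_nr_hugepages holds anon=0 and surp=0 and is scanning for HugePages_Rsvd. The acceptance formula itself lives in setup/hugepages.sh and is not reproduced in this log, so the following is only a plausible illustration of how such counters combine, with a hypothetical awk helper standing in for the script's get_meminfo:

    # Hypothetical verification built from the counters this trace collects.
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    anon=$(meminfo AnonHugePages)      # 0 kB: THP is not skewing the numbers
    surp=$(meminfo HugePages_Surp)     # 0: no surplus pages beyond nr_hugepages
    resv=$(meminfo HugePages_Rsvd)     # 0: nothing reserved but unfaulted
    total=$(meminfo HugePages_Total)   # 1024, the count this test requested
    (( total - surp == 1024 )) || echo 'unexpected hugepage total' >&2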
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.014 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.014 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.014 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.014 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.015 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.015 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:16.015 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan condensed: Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free each fail [[ $var == HugePages_Rsvd ]] and hit continue]
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:16.016 nr_hugepages=1024
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:16.016 resv_hugepages=0
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:16.016 surplus_hugepages=0
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:16.016 anon_hugepages=0
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
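The @107 check above is the pool-consistency invariant this test leans on: the kernel's HugePages_Total must equal the configured persistent page count plus any surplus and reserved pages. A minimal self-contained sketch of the same check (illustrative only; it reads /proc/meminfo directly rather than through the traced get_meminfo helper):

    #!/usr/bin/env bash
    nr_hugepages=1024   # page count the test configured
    # Pull the three counters the invariant needs out of /proc/meminfo.
    read -r total surp resv < <(awk '
        /^HugePages_Total:/ {t = $2}
        /^HugePages_Surp:/  {s = $2}
        /^HugePages_Rsvd:/  {r = $2}
        END {print t, s, r}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"   # 1024 == 1024 + 0 + 0 in this run
    else
        echo "unexpected pool: $total != $nr_hugepages + $surp + $resv" >&2
    fi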
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.016 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44316232 kB' 'MemAvailable: 47823936 kB' 'Buffers: 2704 kB' 'Cached: 11734592 kB' 'SwapCached: 0 kB' 'Active: 8751460 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355964 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523556 kB' 'Mapped: 171268 kB' 'Shmem: 7835608 kB' 'KReclaimable: 199712 kB' 'Slab: 574836 kB' 'SReclaimable: 199712 kB' 'SUnreclaim: 375124 kB' 'KernelStack: 12848 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9430808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
00:04:16.017 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan condensed: every key from MemTotal through Unaccepted fails [[ $var == HugePages_Total ]] and hits continue]
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
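The get_meminfo call that just returned 1024 is a generic key/value scan: slurp /proc/meminfo (or a per-node copy, whose lines carry a "Node N " prefix), split each "key: value" line on IFS=': ', and echo the value once the requested key matches. A standalone sketch of the same technique, simplified for illustration (not the verbatim setup/common.sh helper):

    #!/usr/bin/env bash
    # Print the value for one meminfo key, system-wide or for one NUMA node.
    get_meminfo_value() {
        local key=$1 node=$2 file=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            file=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node [0-9] }   # per-node prefix; the real helper strips +([0-9])
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1
    }
    get_meminfo_value HugePages_Total     # prints 1024 on this runner
    get_meminfo_value HugePages_Surp 0    # prints 0, read from node0's meminfo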
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.018 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.019 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19905796 kB' 'MemUsed: 12971144 kB' 'SwapCached: 0 kB' 'Active: 6510628 kB' 'Inactive: 3357228 kB' 'Active(anon): 6238696 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739548 kB' 'Mapped: 97640 kB' 'AnonPages: 131448 kB' 'Shmem: 6110388 kB' 'KernelStack: 6856 kB' 'PageTables: 3136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 315724 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 213704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:16.019 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan condensed: every key from MemTotal through HugePages_Free fails [[ $var == HugePages_Surp ]] and hits continue]
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:16.020 node0=1024 expecting 1024
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
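get_nodes walks the sysfs node directories with an extglob pattern and records a per-node huge page count, which is what lets the @128 line report "node0=1024 expecting 1024" on this two-node machine. A sketch of that enumeration, assuming 2048 kB pages and reading the per-node nr_hugepages counter (the traced script derives its numbers from the per-node meminfo files instead):

    #!/usr/bin/env bash
    shopt -s extglob                          # enables the +([0-9]) glob seen at @29
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ".../node0" -> index 0; value is that node's allocated 2 MiB huge pages
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"          # 2 on this runner
    for i in "${!nodes_sys[@]}"; do
        echo "node$i=${nodes_sys[i]}"         # node0=1024, node1=0 here
    done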
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:16.020 18:04:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:17.401 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:17.401 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:17.401 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:17.401 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:17.401 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:17.401 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:17.401 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:17.401 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:17.401 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:17.401 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:17.401 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:17.401 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:17.401 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:17.401 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:17.401 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:17.401 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:17.401 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:17.401 INFO: Requested 512 hugepages but 1024 already allocated on node0
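The INFO line is setup.sh declining to touch the pool: with CLEAR_HUGE=no, an existing allocation that already covers NRHUGE is left in place. A hypothetical reconstruction of that decision for a single node (variable names and the sysfs write are assumptions for illustration, not the verbatim scripts/setup.sh logic):

    #!/usr/bin/env bash
    NRHUGE=${NRHUGE:-512}
    CLEAR_HUGE=${CLEAR_HUGE:-no}
    nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(< "$nr")                  # 1024 on this runner
    if [[ $CLEAR_HUGE == yes ]]; then
        echo "$NRHUGE" > "$nr"          # reset the pool to exactly the request
    elif (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$nr"          # grow the pool up to the request
    fi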
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.401 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.402 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44329956 kB' 'MemAvailable: 47837656 kB' 'Buffers: 2704 kB' 'Cached: 11734672 kB' 'SwapCached: 0 kB' 'Active: 8751696 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356200 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523764 kB' 'Mapped: 171388 kB' 'Shmem: 7835688 kB' 'KReclaimable: 199704 kB' 'Slab: 574744 kB' 'SReclaimable: 199704 kB' 'SUnreclaim: 375040 kB' 'KernelStack: 12848 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.403 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.404 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
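The repeated "IFS=': '" / "read -r var val _" / "continue" records above are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots the meminfo file with mapfile, walks the fields one at a time, and skips each with continue until the requested key (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd) matches, at which point it echoes the value and returns. Note that with an empty $node the probe above tests /sys/devices/system/node/node/meminfo, which does not exist, so the global /proc/meminfo is scanned; when a node is given, the real helper reads the per-node copy and strips the leading "Node <N>" prefix with "${mem[@]#Node +([0-9]) }". A minimal standalone sketch of that scan, simplified from the traced helper rather than copied from it:

    get_meminfo() {                       # usage: get_meminfo <field>
            local get=$1 var val _
            while IFS=': ' read -r var val _; do
                    # Skip every other field, exactly like the 'continue'
                    # records in the trace, until the requested key matches.
                    [[ $var == "$get" ]] || continue
                    echo "${val:-0}"
                    return 0
            done </proc/meminfo
            echo 0                        # field absent: report 0
    }

    get_meminfo HugePages_Rsvd            # prints 0 on this box

Running it against the /proc/meminfo dumped above yields 0, which is exactly what the trace stores as surp=0 before starting the HugePages_Rsvd query below.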
00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44329908 kB' 'MemAvailable: 47837608 kB' 'Buffers: 2704 kB' 'Cached: 11734692 kB' 'SwapCached: 0 kB' 'Active: 8751636 kB' 'Inactive: 3506192 kB' 'Active(anon): 8356140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523680 kB' 'Mapped: 171312 kB' 'Shmem: 7835708 kB' 'KReclaimable: 199704 kB' 'Slab: 574772 kB' 'SReclaimable: 199704 kB' 'SUnreclaim: 375068 kB' 'KernelStack: 12912 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.405 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.406 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical per-key scan elided: Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free are each tested against HugePages_Rsvd and skipped via "setup/common.sh@32 -- # continue" / "@31 -- # IFS=': '" / "@31 -- # read -r var val _" ...]
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
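The query traced above follows one simple pattern: split each /proc/meminfo line on ': ' and return the value of the first key that matches. A minimal bash sketch of that pattern (illustrative names, not the verbatim test/setup/common.sh source):

# Sketch of the get_meminfo pattern traced above (illustrative only).
get_meminfo_sketch() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Total:    1024" into key and value fields.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        fi
    done < /proc/meminfo
    return 1
}

Called as get_meminfo_sketch HugePages_Rsvd, this yields the 0 that becomes resv=0 above.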
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.407 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44330596 kB' 'MemAvailable: 47838296 kB' 'Buffers: 2704 kB' 'Cached: 11734692 kB' 'SwapCached: 0 kB' 'Active: 8751352 kB' 'Inactive: 3506192 kB' 'Active(anon): 8355856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523396 kB' 'Mapped: 171312 kB' 'Shmem: 7835708 kB' 'KReclaimable: 199704 kB' 'Slab: 574772 kB' 'SReclaimable: 199704 kB' 'SUnreclaim: 375068 kB' 'KernelStack: 12912 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1873500 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 52428800 kB'
[... identical per-key scan elided: MemTotal through Unaccepted are each tested against HugePages_Total and skipped via "@32 -- # continue" ...]
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
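The node-scoped call that starts here differs only in the source file and in the "Node <id> " prefix that sysfs prepends to every line, which the trace strips with the extglob pattern ${mem[@]#Node +([0-9]) }. A hedged sketch of the same idea (hypothetical helper name, not the verbatim source):

# Sketch of the per-node variant (illustrative only). Lines in
# /sys/devices/system/node/nodeN/meminfo look like "Node 0 HugePages_Surp: 0",
# so the "Node N " prefix is stripped before the key/value parse.
shopt -s extglob
get_node_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }              # drop the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"   # then parse as before
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node$node/meminfo"
    return 1
}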
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.409 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19925096 kB' 'MemUsed: 12951844 kB' 'SwapCached: 0 kB' 'Active: 6510596 kB' 'Inactive: 3357228 kB' 'Active(anon): 6238664 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9739564 kB' 'Mapped: 97648 kB' 'AnonPages: 131468 kB' 'Shmem: 6110404 kB' 'KernelStack: 6920 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102020 kB' 'Slab: 315668 kB' 'SReclaimable: 102020 kB' 'SUnreclaim: 213648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical per-key scan elided: MemTotal through HugePages_Free are each tested against HugePages_Surp and skipped via "@32 -- # continue" ...]
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
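The "node0=1024 expecting 1024" result above, and the clear_hp cleanup that follows shortly, both operate on the same per-node sysfs counters. Roughly (an illustrative sketch, 2 MiB pages assumed, root required for the writes; hugepages.sh's get_nodes/clear_hp are the real implementations):

# Sketch: read, then zero, the per-node hugepage pools (illustrative only).
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "node0=${nodes_sys[0]}"           # matches the trace: node0=1024

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"    # release every page size on every node
    done
done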
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:17.411
00:04:17.411 real 0m2.725s
00:04:17.411 user 0m1.097s
00:04:17.411 sys 0m1.547s
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:17.411 18:04:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:17.411 ************************************
00:04:17.411 END TEST no_shrink_alloc
00:04:17.411 ************************************
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:17.411 18:04:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:17.411
00:04:17.411 real 0m11.136s
00:04:17.411 user 0m4.239s
00:04:17.411 sys 0m5.787s
00:04:17.411 18:04:43 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:17.411 18:04:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:17.411 ************************************
00:04:17.411 END TEST hugepages
00:04:17.411 ************************************
00:04:17.411 18:04:43 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:17.411 18:04:43 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:17.411 18:04:43 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:17.411 18:04:43 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:17.411 ************************************
00:04:17.411 START TEST driver
00:04:17.411 ************************************
00:04:17.411 18:04:43 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:17.669 * Looking for test storage...
00:04:17.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:17.669 18:04:43 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:17.669 18:04:43 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:17.669 18:04:43 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:20.208 18:04:46 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:20.208 18:04:46 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:20.208 18:04:46 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.208 18:04:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:20.208 ************************************
00:04:20.208 START TEST guess_driver
00:04:20.208 ************************************
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:20.208 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
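The pick traced above reduces to a few probes: prefer vfio-pci whenever IOMMU groups exist (141 on this host, or unsafe no-IOMMU mode is enabled) and modprobe can resolve the module's dependency chain. A hedged sketch, not the verbatim driver.sh:

# Sketch of the driver pick shown above (illustrative only).
shopt -s nullglob                      # so an empty glob yields an empty array
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
        modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci                  # the branch taken in this run
    else
        echo uio_pci_generic           # a common fallback without an IOMMU
    fi
}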
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
Looking for driver=vfio-pci
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.208 18:04:46 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:21.143 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:21.143 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:21.143 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58/@61/@57 triplet repeats at 00:04:21.143 and 00:04:21.403 for each remaining device line reported by setup.sh config, every one matching vfio-pci ...]
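The run of identical checks condensed above is the read side of that "setup output config" call: each reported binding line ends in "-> <driver>", and the loop asserts that every device got the guessed driver. Schematically (the field layout is inferred from the read call in the trace, and the input file is a hypothetical stand-in for the setup call; illustrative only):

# Sketch of the config read-back loop (illustrative only).
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == "->" ]] || continue            # only device-binding lines
    [[ $setup_driver == vfio-pci ]] || fail=1    # every device must match
done < config_output.txt                         # stand-in for `setup output config`
(( fail == 0 ))                                  # the check at driver.sh@64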
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.403 18:04:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.342 18:04:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.342 18:04:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.343 18:04:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.343 18:04:48 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:22.343 18:04:48 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:22.343 18:04:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.343 18:04:48 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.881 00:04:24.881 real 0m4.703s 00:04:24.881 user 0m1.085s 00:04:24.881 sys 0m1.752s 00:04:24.881 18:04:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.881 18:04:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.881 ************************************ 00:04:24.881 END TEST guess_driver 00:04:24.881 ************************************ 00:04:24.881 00:04:24.881 real 0m7.278s 00:04:24.881 user 0m1.683s 00:04:24.881 sys 0m2.719s 00:04:24.881 18:04:50 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.881 
18:04:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.881 ************************************ 00:04:24.881 END TEST driver 00:04:24.881 ************************************ 00:04:24.881 18:04:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.881 18:04:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.881 18:04:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.881 18:04:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.881 ************************************ 00:04:24.881 START TEST devices 00:04:24.881 ************************************ 00:04:24.881 18:04:50 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.881 * Looking for test storage... 00:04:24.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.881 18:04:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.881 18:04:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:24.881 18:04:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.881 18:04:50 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:26.261 18:04:52 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.261 No valid GPT data, 
bailing 00:04:26.261 18:04:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:26.261 18:04:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.261 18:04:52 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:26.261 18:04:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.261 18:04:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.521 ************************************ 00:04:26.521 START TEST nvme_mount 00:04:26.521 ************************************ 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.521 18:04:52 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.521 18:04:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:27.460 Creating new GPT entries in memory. 00:04:27.460 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.460 other utilities. 00:04:27.460 18:04:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.460 18:04:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.460 18:04:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.460 18:04:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.460 18:04:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.398 Creating new GPT entries in memory. 00:04:28.398 The operation has completed successfully. 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1323119 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.398 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
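
The verify() helper entered above drives the long `read -r pci _ _ status` loop that fills the next stretch of trace: it re-runs setup.sh in config mode with PCI_ALLOWED pinned to the device under test and scans each output row for an "Active devices:" entry naming the expected mount, which is how the test confirms setup.sh refused to steal a busy disk. A minimal sketch of that parsing, assuming a setup.sh that prints one "BDF vendor device status" row per device (wrapper names and paths here are illustrative, not the exact SPDK helpers):

    target_bdf=0000:88:00.0              # the PCI_ALLOWED device under test
    expected=nvme0n1:nvme0n1p1           # mount signature verify() looks for
    found=0
    while read -r pci _ _ status; do
        # only the row for the device under test matters
        [[ $pci == "$target_bdf" ]] || continue
        # setup.sh reports busy devices as "Active devices: mount@..., so not
        # binding PCI dev"; the test counts that refusal as success
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(PCI_ALLOWED="$target_bdf" ./setup.sh config)
    (( found == 1 )) || echo "expected $expected to hold $target_bdf" >&2
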
00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.399 18:04:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.779 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.779 18:04:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.038 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.038 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.038 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.038 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.038 18:04:56 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.038 18:04:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.415 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.416 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:31.416 18:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.416 18:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.416 18:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.794 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.794 00:04:32.794 real 0m6.306s 00:04:32.794 user 0m1.507s 00:04:32.794 sys 0m2.370s 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.794 18:04:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:32.794 ************************************ 00:04:32.794 END TEST nvme_mount 00:04:32.794 ************************************ 
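
That closes nvme_mount: the test formatted the GPT partition, mounted it under test/setup/nvme_mount, planted and removed a marker file, confirmed setup.sh would not rebind the busy device, then unmounted and wiped both the partition and the raw disk. Condensed to its shell essentials (device and mount point are hardcoded assumptions for the sketch, and the sequence is destructive to the named device):

    dev=/dev/nvme0n1p1
    mnt=/tmp/nvme_mount_demo
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"          # quiet, force: the flags the trace shows
    mount "$dev" "$mnt"
    : > "$mnt/test_nvme"          # marker file the verify step checks for
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "$dev"           # cleanup_nvme: erase the fs signature
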
00:04:32.794 18:04:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:32.794 18:04:58 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.794 18:04:58 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.794 18:04:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:32.794 ************************************ 00:04:32.794 START TEST dm_mount 00:04:32.794 ************************************ 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:32.794 18:04:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:33.732 Creating new GPT entries in memory. 00:04:33.732 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:33.732 other utilities. 00:04:33.732 18:04:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:33.732 18:04:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.732 18:04:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.732 18:04:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.732 18:04:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:34.699 Creating new GPT entries in memory. 00:04:34.699 The operation has completed successfully. 
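
The partition_drive steps above, together with the second sgdisk call just below, carve two 1 GiB partitions out of the zapped disk, serializing each table write against udev with flock on the whole-disk node. The sector arithmetic, assuming the 512-byte sectors implied by the (( size /= 512 )) step (a destructive sketch; the disk path is the test machine's):

    disk=/dev/nvme0n1
    size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
    start=2048                     # first usable sector after the GPT header
    sgdisk "$disk" --zap-all       # destroy old GPT and MBR structures
    for part in 1 2; do
        end=$(( start + size - 1 ))
        # flock keeps udev-triggered re-reads from racing the table write
        flock "$disk" sgdisk "$disk" --new=${part}:${start}:${end}
        start=$(( end + 1 ))
    done
    # yields 1:2048:2099199 and 2:2099200:4196351, matching the trace
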
00:04:34.699 18:05:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:34.700 18:05:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.700 18:05:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.700 18:05:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.700 18:05:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:36.080 The operation has completed successfully. 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1325511 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.080 18:05:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:37.016 18:05:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:37.016 18:05:03 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:37.016 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.285 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.285 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:37.285 18:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.285 18:05:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.285 18:05:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.228 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.229 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.229 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:38.490 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:38.490 00:04:38.490 real 0m5.733s 00:04:38.490 user 0m1.021s 00:04:38.490 sys 0m1.570s 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.490 18:05:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.490 ************************************ 00:04:38.490 END TEST dm_mount 00:04:38.490 ************************************ 00:04:38.490 18:05:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:38.490 18:05:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:38.490 18:05:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.490 18:05:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.490 18:05:04 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 18:05:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:38.749 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:38.749 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:38.749 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:38.749 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:38.749 18:05:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:38.749
00:04:38.749 real 0m13.959s
00:04:38.749 user 0m3.212s
00:04:38.749 sys 0m4.938s
00:04:38.749 18:05:04 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:38.749 18:05:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:38.749 ************************************
00:04:38.749 END TEST devices
00:04:38.749 ************************************
00:04:38.749
00:04:38.749 real 0m43.181s
00:04:38.749 user 0m12.457s
00:04:38.749 sys 0m18.939s
00:04:38.749 18:05:04 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:38.749 18:05:04 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:38.749 ************************************
00:04:38.749 END TEST setup.sh
00:04:38.749 ************************************
00:04:38.749 18:05:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:40.128 Hugepages
00:04:40.128 node hugesize free / total
00:04:40.128 node0 1048576kB 0 / 0
00:04:40.128 node0 2048kB 2048 / 2048
00:04:40.128 node1 1048576kB 0 / 0
00:04:40.128 node1 2048kB 0 / 0
00:04:40.128
00:04:40.128 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:40.128 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:04:40.128 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:04:40.128 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:04:40.128 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:04:40.128 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:04:40.128 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:04:40.128 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:04:40.129 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:04:40.129 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:04:40.129 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:04:40.129 18:05:06 -- spdk/autotest.sh@130 -- # uname -s
00:04:40.129 18:05:06 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:40.129 18:05:06 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:40.129 18:05:06 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.507 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:41.507 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:41.507 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.448 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:42.448 18:05:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:43.388 18:05:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:43.388 18:05:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:43.388 18:05:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:43.388 18:05:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:43.388 18:05:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:43.388 18:05:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:43.388 18:05:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.388 18:05:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.388 18:05:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:43.388 18:05:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:43.388 18:05:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:43.388 18:05:09 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.767 Waiting for block devices as requested 00:04:44.767 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:44.767 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:44.767 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:45.027 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:45.027 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:45.027 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:45.027 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:45.027 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:45.288 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:45.288 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:45.288 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:45.288 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:45.547 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:45.547 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:45.547 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:45.547 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:45.806 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:45.806 18:05:11 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
00:04:45.806 18:05:11 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:45.806 18:05:11 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:45.806 18:05:11 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:45.806 18:05:11 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:45.806 18:05:11 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:45.806 18:05:11 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:45.806 18:05:11 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:45.806 18:05:11 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:45.806 18:05:11 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:45.806 18:05:11 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:45.806 18:05:11 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:45.806 18:05:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:45.806 18:05:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:45.806 18:05:11 -- common/autotest_common.sh@1557 -- # continue 00:04:45.806 18:05:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:45.806 18:05:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.806 18:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.806 18:05:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:45.806 18:05:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.806 18:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.806 18:05:11 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.183 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:47.183 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:47.183 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:48.120 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.120 18:05:14 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:48.120 18:05:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.120 18:05:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:48.120 18:05:14 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:48.120 18:05:14 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:48.120 18:05:14 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.120 18:05:14 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:48.120 18:05:14 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:48.120 18:05:14 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:48.120 18:05:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:48.120 18:05:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:48.120 18:05:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.120 18:05:14 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.120 18:05:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:48.380 18:05:14 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:48.380 18:05:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:48.380 18:05:14 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:48.380 18:05:14 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:48.380 18:05:14 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:48.380 18:05:14 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:48.380 18:05:14 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:48.380 18:05:14 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:48.380 18:05:14 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:48.380 18:05:14 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1330687 00:04:48.380 18:05:14 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.380 18:05:14 -- common/autotest_common.sh@1598 -- # waitforlisten 1330687 00:04:48.380 18:05:14 -- common/autotest_common.sh@831 -- # '[' -z 1330687 ']' 00:04:48.380 18:05:14 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.380 18:05:14 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.380 18:05:14 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.380 18:05:14 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.380 18:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.380 [2024-07-26 18:05:14.332389] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:48.380 [2024-07-26 18:05:14.332511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330687 ] 00:04:48.380 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.380 [2024-07-26 18:05:14.365220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:48.380 [2024-07-26 18:05:14.392311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.380 [2024-07-26 18:05:14.480919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.640 18:05:14 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.640 18:05:14 -- common/autotest_common.sh@864 -- # return 0 00:04:48.640 18:05:14 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:48.640 18:05:14 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:48.640 18:05:14 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:51.933 nvme0n1 00:04:51.933 18:05:17 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:52.201 [2024-07-26 18:05:18.078167] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:52.201 [2024-07-26 18:05:18.078213] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:52.201 request: 00:04:52.201 { 00:04:52.201 "nvme_ctrlr_name": "nvme0", 00:04:52.201 "password": "test", 00:04:52.201 "method": "bdev_nvme_opal_revert", 00:04:52.201 "req_id": 1 00:04:52.201 } 00:04:52.201 Got JSON-RPC error response 00:04:52.201 response: 00:04:52.201 { 00:04:52.201 "code": -32603, 00:04:52.201 "message": "Internal error" 00:04:52.201 } 00:04:52.201 18:05:18 -- common/autotest_common.sh@1604 -- # true 00:04:52.201 18:05:18 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:52.201 18:05:18 -- common/autotest_common.sh@1608 -- # killprocess 1330687 00:04:52.201 18:05:18 -- common/autotest_common.sh@950 -- # '[' -z 1330687 ']' 00:04:52.201 18:05:18 -- common/autotest_common.sh@954 -- # kill -0 1330687 00:04:52.201 18:05:18 -- common/autotest_common.sh@955 -- # uname 00:04:52.201 18:05:18 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.201 18:05:18 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330687 00:04:52.201 18:05:18 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.201 18:05:18 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.201 18:05:18 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1330687' 00:04:52.201 killing process with pid 1330687 00:04:52.201 18:05:18 -- common/autotest_common.sh@969 -- # kill 1330687 00:04:52.201 18:05:18 -- common/autotest_common.sh@974 -- # wait 1330687 00:04:54.133 18:05:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:54.133 18:05:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:54.133 18:05:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.133 18:05:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.133 18:05:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:54.133 18:05:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.133 18:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:54.133 18:05:19 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:54.133 18:05:19 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:54.133 18:05:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.133 18:05:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.133 18:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:54.133 ************************************ 00:04:54.133 START TEST env 
00:04:54.133 ************************************ 00:04:54.133 18:05:19 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:54.133 * Looking for test storage... 00:04:54.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:54.133 18:05:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.133 18:05:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.133 18:05:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.133 18:05:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.133 ************************************ 00:04:54.133 START TEST env_memory 00:04:54.133 ************************************ 00:04:54.133 18:05:19 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.133 00:04:54.133 00:04:54.133 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.133 http://cunit.sourceforge.net/ 00:04:54.133 00:04:54.133 00:04:54.133 Suite: memory 00:04:54.133 Test: alloc and free memory map ...[2024-07-26 18:05:20.005218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.133 passed 00:04:54.133 Test: mem map translation ...[2024-07-26 18:05:20.027776] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.133 [2024-07-26 18:05:20.027821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.134 [2024-07-26 18:05:20.027868] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.134 [2024-07-26 18:05:20.027882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.134 passed 00:04:54.134 Test: mem map registration ...[2024-07-26 18:05:20.071044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:54.134 [2024-07-26 18:05:20.071083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:54.134 passed 00:04:54.134 Test: mem map adjacent registrations ...passed 00:04:54.134 00:04:54.134 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.134 suites 1 1 n/a 0 0 00:04:54.134 tests 4 4 4 0 0 00:04:54.134 asserts 152 152 152 0 n/a 00:04:54.134 00:04:54.134 Elapsed time = 0.147 seconds 00:04:54.134 00:04:54.134 real 0m0.156s 00:04:54.134 user 0m0.149s 00:04:54.134 sys 0m0.007s 00:04:54.134 18:05:20 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.134 18:05:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:54.134 ************************************ 00:04:54.134 END TEST env_memory 00:04:54.134 ************************************ 00:04:54.134 18:05:20 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.134 18:05:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.134 18:05:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.134 18:05:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.134 ************************************ 00:04:54.134 START TEST env_vtophys 00:04:54.134 ************************************ 00:04:54.134 18:05:20 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.134 EAL: lib.eal log level changed from notice to debug 00:04:54.134 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.134 EAL: Detected lcore 1 as core 1 on socket 0 00:04:54.134 EAL: Detected lcore 2 as core 2 on socket 0 00:04:54.134 EAL: Detected lcore 3 as core 3 on socket 0 00:04:54.134 EAL: Detected lcore 4 as core 4 on socket 0 00:04:54.134 EAL: Detected lcore 5 as core 5 on socket 0 00:04:54.134 EAL: Detected lcore 6 as core 8 on socket 0 00:04:54.134 EAL: Detected lcore 7 as core 9 on socket 0 00:04:54.134 EAL: Detected lcore 8 as core 10 on socket 0 00:04:54.134 EAL: Detected lcore 9 as core 11 on socket 0 00:04:54.134 EAL: Detected lcore 10 as core 12 on socket 0 00:04:54.134 EAL: Detected lcore 11 as core 13 on socket 0 00:04:54.134 EAL: Detected lcore 12 as core 0 on socket 1 00:04:54.134 EAL: Detected lcore 13 as core 1 on socket 1 00:04:54.134 EAL: Detected lcore 14 as core 2 on socket 1 00:04:54.134 EAL: Detected lcore 15 as core 3 on socket 1 00:04:54.134 EAL: Detected lcore 16 as core 4 on socket 1 00:04:54.134 EAL: Detected lcore 17 as core 5 on socket 1 00:04:54.134 EAL: Detected lcore 18 as core 8 on socket 1 00:04:54.134 EAL: Detected lcore 19 as core 9 on socket 1 00:04:54.134 EAL: Detected lcore 20 as core 10 on socket 1 00:04:54.134 EAL: Detected lcore 21 as core 11 on socket 1 00:04:54.134 EAL: Detected lcore 22 as core 12 on socket 1 00:04:54.134 EAL: Detected lcore 23 as core 13 on socket 1 00:04:54.134 EAL: Detected lcore 24 as core 0 on socket 0 00:04:54.134 EAL: Detected lcore 25 as core 1 on socket 0 00:04:54.134 EAL: Detected lcore 26 as core 2 on socket 0 00:04:54.134 EAL: Detected lcore 27 as core 3 on socket 0 00:04:54.134 EAL: Detected lcore 28 as core 4 on socket 0 00:04:54.134 EAL: Detected lcore 29 as core 5 on socket 0 00:04:54.134 EAL: Detected lcore 30 as core 8 on socket 0 00:04:54.134 EAL: Detected lcore 31 as core 9 on socket 0 00:04:54.134 EAL: Detected lcore 32 as core 10 on socket 0 00:04:54.134 EAL: Detected lcore 33 as core 11 on socket 0 00:04:54.134 EAL: Detected lcore 34 as core 12 on socket 0 00:04:54.134 EAL: Detected lcore 35 as core 13 on socket 0 00:04:54.134 EAL: Detected lcore 36 as core 0 on socket 1 00:04:54.134 EAL: Detected lcore 37 as core 1 on socket 1 00:04:54.134 EAL: Detected lcore 38 as core 2 on socket 1 00:04:54.134 EAL: Detected lcore 39 as core 3 on socket 1 00:04:54.134 EAL: Detected lcore 40 as core 4 on socket 1 00:04:54.134 EAL: Detected lcore 41 as core 5 on socket 1 00:04:54.134 EAL: Detected lcore 42 as core 8 on socket 1 00:04:54.134 EAL: Detected lcore 43 as core 9 on socket 1 00:04:54.134 EAL: Detected lcore 44 as core 10 on socket 1 00:04:54.134 EAL: Detected lcore 45 as core 11 on socket 1 00:04:54.134 EAL: Detected lcore 46 as core 12 on socket 1 00:04:54.134 EAL: Detected lcore 47 as core 13 on socket 1 00:04:54.134 EAL: Maximum logical cores by configuration: 128 00:04:54.134 EAL: Detected CPU lcores: 48 
00:04:54.134 EAL: Detected NUMA nodes: 2 00:04:54.134 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:54.134 EAL: Detected shared linkage of DPDK 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:54.134 EAL: Registered [vdev] bus. 00:04:54.134 EAL: bus.vdev log level changed from disabled to notice 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:54.134 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:54.134 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:54.134 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:54.134 EAL: No shared files mode enabled, IPC will be disabled 00:04:54.134 EAL: No shared files mode enabled, IPC is disabled 00:04:54.134 EAL: Bus pci wants IOVA as 'DC' 00:04:54.134 EAL: Bus vdev wants IOVA as 'DC' 00:04:54.134 EAL: Buses did not request a specific IOVA mode. 00:04:54.134 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:54.134 EAL: Selected IOVA mode 'VA' 00:04:54.134 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.134 EAL: Probing VFIO support... 00:04:54.134 EAL: IOMMU type 1 (Type 1) is supported 00:04:54.134 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:54.134 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:54.134 EAL: VFIO support initialized 00:04:54.134 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.134 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.134 EAL: Setting up physically contiguous memory... 
00:04:54.134 EAL: Setting maximum number of open files to 524288 00:04:54.134 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.134 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:54.134 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.134 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:54.134 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.134 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:54.134 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.134 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.134 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:54.134 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:54.134 EAL: Hugepages will be freed exactly as allocated. 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: TSC frequency is ~2700000 KHz 00:04:54.135 EAL: Main lcore 0 is ready (tid=7fc952019a00;cpuset=[0]) 00:04:54.135 EAL: Trying to obtain current memory policy. 00:04:54.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.135 EAL: Restoring previous memory policy: 0 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.135 00:04:54.135 00:04:54.135 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.135 http://cunit.sourceforge.net/ 00:04:54.135 00:04:54.135 00:04:54.135 Suite: components_suite 00:04:54.135 Test: vtophys_malloc_test ...passed 00:04:54.135 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:54.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.135 EAL: Restoring previous memory policy: 4 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.135 EAL: Trying to obtain current memory policy. 00:04:54.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.135 EAL: Restoring previous memory policy: 4 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.135 EAL: Trying to obtain current memory policy. 00:04:54.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.135 EAL: Restoring previous memory policy: 4 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.135 EAL: Trying to obtain current memory policy. 
00:04:54.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.135 EAL: Restoring previous memory policy: 4 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.135 EAL: Trying to obtain current memory policy. 00:04:54.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.135 EAL: Restoring previous memory policy: 4 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.135 EAL: request: mp_malloc_sync 00:04:54.135 EAL: No shared files mode enabled, IPC is disabled 00:04:54.135 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.135 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.394 EAL: request: mp_malloc_sync 00:04:54.394 EAL: No shared files mode enabled, IPC is disabled 00:04:54.394 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.394 EAL: Trying to obtain current memory policy. 00:04:54.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.394 EAL: Restoring previous memory policy: 4 00:04:54.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.394 EAL: request: mp_malloc_sync 00:04:54.394 EAL: No shared files mode enabled, IPC is disabled 00:04:54.394 EAL: Heap on socket 0 was expanded by 66MB 00:04:54.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.394 EAL: request: mp_malloc_sync 00:04:54.394 EAL: No shared files mode enabled, IPC is disabled 00:04:54.394 EAL: Heap on socket 0 was shrunk by 66MB 00:04:54.394 EAL: Trying to obtain current memory policy. 00:04:54.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.394 EAL: Restoring previous memory policy: 4 00:04:54.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.394 EAL: request: mp_malloc_sync 00:04:54.394 EAL: No shared files mode enabled, IPC is disabled 00:04:54.394 EAL: Heap on socket 0 was expanded by 130MB 00:04:54.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.394 EAL: request: mp_malloc_sync 00:04:54.394 EAL: No shared files mode enabled, IPC is disabled 00:04:54.394 EAL: Heap on socket 0 was shrunk by 130MB 00:04:54.394 EAL: Trying to obtain current memory policy. 00:04:54.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.394 EAL: Restoring previous memory policy: 4 00:04:54.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.394 EAL: request: mp_malloc_sync 00:04:54.394 EAL: No shared files mode enabled, IPC is disabled 00:04:54.394 EAL: Heap on socket 0 was expanded by 258MB 00:04:54.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.653 EAL: request: mp_malloc_sync 00:04:54.653 EAL: No shared files mode enabled, IPC is disabled 00:04:54.653 EAL: Heap on socket 0 was shrunk by 258MB 00:04:54.653 EAL: Trying to obtain current memory policy. 
00:04:54.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.653 EAL: Restoring previous memory policy: 4 00:04:54.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.653 EAL: request: mp_malloc_sync 00:04:54.653 EAL: No shared files mode enabled, IPC is disabled 00:04:54.653 EAL: Heap on socket 0 was expanded by 514MB 00:04:54.913 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.913 EAL: request: mp_malloc_sync 00:04:54.913 EAL: No shared files mode enabled, IPC is disabled 00:04:54.913 EAL: Heap on socket 0 was shrunk by 514MB 00:04:54.913 EAL: Trying to obtain current memory policy. 00:04:54.913 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.172 EAL: Restoring previous memory policy: 4 00:04:55.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.173 EAL: request: mp_malloc_sync 00:04:55.173 EAL: No shared files mode enabled, IPC is disabled 00:04:55.173 EAL: Heap on socket 0 was expanded by 1026MB 00:04:55.431 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.690 EAL: request: mp_malloc_sync 00:04:55.690 EAL: No shared files mode enabled, IPC is disabled 00:04:55.690 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:55.690 passed 00:04:55.690 00:04:55.691 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.691 suites 1 1 n/a 0 0 00:04:55.691 tests 2 2 2 0 0 00:04:55.691 asserts 497 497 497 0 n/a 00:04:55.691 00:04:55.691 Elapsed time = 1.364 seconds 00:04:55.691 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.691 EAL: request: mp_malloc_sync 00:04:55.691 EAL: No shared files mode enabled, IPC is disabled 00:04:55.691 EAL: Heap on socket 0 was shrunk by 2MB 00:04:55.691 EAL: No shared files mode enabled, IPC is disabled 00:04:55.691 EAL: No shared files mode enabled, IPC is disabled 00:04:55.691 EAL: No shared files mode enabled, IPC is disabled 00:04:55.691 00:04:55.691 real 0m1.480s 00:04:55.691 user 0m0.845s 00:04:55.691 sys 0m0.607s 00:04:55.691 18:05:21 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.691 18:05:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:55.691 ************************************ 00:04:55.691 END TEST env_vtophys 00:04:55.691 ************************************ 00:04:55.691 18:05:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:55.691 18:05:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.691 18:05:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.691 18:05:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.691 ************************************ 00:04:55.691 START TEST env_pci 00:04:55.691 ************************************ 00:04:55.691 18:05:21 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:55.691 00:04:55.691 00:04:55.691 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.691 http://cunit.sourceforge.net/ 00:04:55.691 00:04:55.691 00:04:55.691 Suite: pci 00:04:55.691 Test: pci_hook ...[2024-07-26 18:05:21.714390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1331582 has claimed it 00:04:55.691 EAL: Cannot find device (10000:00:01.0) 00:04:55.691 EAL: Failed to attach device on primary process 00:04:55.691 passed 00:04:55.691 00:04:55.691 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:55.691 suites 1 1 n/a 0 0 00:04:55.691 tests 1 1 1 0 0 00:04:55.691 asserts 25 25 25 0 n/a 00:04:55.691 00:04:55.691 Elapsed time = 0.021 seconds 00:04:55.691 00:04:55.691 real 0m0.033s 00:04:55.691 user 0m0.010s 00:04:55.691 sys 0m0.023s 00:04:55.691 18:05:21 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.691 18:05:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:55.691 ************************************ 00:04:55.691 END TEST env_pci 00:04:55.691 ************************************ 00:04:55.691 18:05:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:55.691 18:05:21 env -- env/env.sh@15 -- # uname 00:04:55.691 18:05:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:55.691 18:05:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:55.691 18:05:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.691 18:05:21 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:55.691 18:05:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.691 18:05:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.691 ************************************ 00:04:55.691 START TEST env_dpdk_post_init 00:04:55.691 ************************************ 00:04:55.691 18:05:21 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.691 EAL: Detected CPU lcores: 48 00:04:55.691 EAL: Detected NUMA nodes: 2 00:04:55.691 EAL: Detected shared linkage of DPDK 00:04:55.691 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.691 EAL: Selected IOVA mode 'VA' 00:04:55.691 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.691 EAL: VFIO support initialized 00:04:55.950 EAL: Using IOMMU type 1 (Type 1) 00:05:00.141 Starting DPDK initialization... 00:05:00.141 Starting SPDK post initialization... 00:05:00.141 SPDK NVMe probe 00:05:00.141 Attaching to 0000:88:00.0 00:05:00.141 Attached to 0000:88:00.0 00:05:00.141 Cleaning up... 
00:05:00.141 00:05:00.141 real 0m4.384s 00:05:00.141 user 0m3.249s 00:05:00.141 sys 0m0.199s 00:05:00.141 18:05:26 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.141 18:05:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.141 ************************************ 00:05:00.141 END TEST env_dpdk_post_init 00:05:00.141 ************************************ 00:05:00.141 18:05:26 env -- env/env.sh@26 -- # uname 00:05:00.141 18:05:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.141 18:05:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.141 18:05:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.141 18:05:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.141 18:05:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.141 ************************************ 00:05:00.141 START TEST env_mem_callbacks 00:05:00.141 ************************************ 00:05:00.141 18:05:26 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.141 EAL: Detected CPU lcores: 48 00:05:00.141 EAL: Detected NUMA nodes: 2 00:05:00.141 EAL: Detected shared linkage of DPDK 00:05:00.141 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.141 EAL: Selected IOVA mode 'VA' 00:05:00.141 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.141 EAL: VFIO support initialized 00:05:00.141 00:05:00.141 00:05:00.141 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.141 http://cunit.sourceforge.net/ 00:05:00.141 00:05:00.141 00:05:00.141 Suite: memory 00:05:00.141 Test: test ... 
00:05:00.141 register 0x200000200000 2097152 00:05:00.141 malloc 3145728 00:05:00.141 register 0x200000400000 4194304 00:05:00.141 buf 0x200000500000 len 3145728 PASSED 00:05:00.141 malloc 64 00:05:00.141 buf 0x2000004fff40 len 64 PASSED 00:05:00.141 malloc 4194304 00:05:00.141 register 0x200000800000 6291456 00:05:00.141 buf 0x200000a00000 len 4194304 PASSED 00:05:00.141 free 0x200000500000 3145728 00:05:00.141 free 0x2000004fff40 64 00:05:00.141 unregister 0x200000400000 4194304 PASSED 00:05:00.141 free 0x200000a00000 4194304 00:05:00.141 unregister 0x200000800000 6291456 PASSED 00:05:00.141 malloc 8388608 00:05:00.141 register 0x200000400000 10485760 00:05:00.141 buf 0x200000600000 len 8388608 PASSED 00:05:00.141 free 0x200000600000 8388608 00:05:00.141 unregister 0x200000400000 10485760 PASSED 00:05:00.141 passed 00:05:00.141 00:05:00.141 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.141 suites 1 1 n/a 0 0 00:05:00.141 tests 1 1 1 0 0 00:05:00.141 asserts 15 15 15 0 n/a 00:05:00.141 00:05:00.141 Elapsed time = 0.005 seconds 00:05:00.141 00:05:00.141 real 0m0.048s 00:05:00.141 user 0m0.017s 00:05:00.141 sys 0m0.031s 00:05:00.141 18:05:26 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.141 18:05:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:00.141 ************************************ 00:05:00.141 END TEST env_mem_callbacks 00:05:00.141 ************************************ 00:05:00.141 00:05:00.141 real 0m6.392s 00:05:00.141 user 0m4.393s 00:05:00.141 sys 0m1.051s 00:05:00.400 18:05:26 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.400 18:05:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.400 ************************************ 00:05:00.400 END TEST env 00:05:00.400 ************************************ 00:05:00.400 18:05:26 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.400 18:05:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.400 18:05:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.400 18:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.400 ************************************ 00:05:00.400 START TEST rpc 00:05:00.400 ************************************ 00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.400 * Looking for test storage... 00:05:00.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.400 18:05:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1332238 00:05:00.400 18:05:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:00.400 18:05:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.400 18:05:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1332238 00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@831 -- # '[' -z 1332238 ']' 00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.400 18:05:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.400 [2024-07-26 18:05:26.439416] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:00.400 [2024-07-26 18:05:26.439512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332238 ] 00:05:00.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.400 [2024-07-26 18:05:26.470732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:00.400 [2024-07-26 18:05:26.501421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.658 [2024-07-26 18:05:26.595370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:00.658 [2024-07-26 18:05:26.595443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1332238' to capture a snapshot of events at runtime. 00:05:00.658 [2024-07-26 18:05:26.595459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.658 [2024-07-26 18:05:26.595472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.658 [2024-07-26 18:05:26.595484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1332238 for offline analysis/debug. 00:05:00.658 [2024-07-26 18:05:26.595515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.917 18:05:26 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.917 18:05:26 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:00.917 18:05:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.917 18:05:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.917 18:05:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:00.917 18:05:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:00.917 18:05:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.917 18:05:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.917 18:05:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 ************************************ 00:05:00.917 START TEST rpc_integrity 00:05:00.917 ************************************ 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 18:05:26 rpc.rpc_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.917 { 00:05:00.917 "name": "Malloc0", 00:05:00.917 "aliases": [ 00:05:00.917 "8ed558bb-f5d9-40d7-8b57-47f04229bc1b" 00:05:00.917 ], 00:05:00.917 "product_name": "Malloc disk", 00:05:00.917 "block_size": 512, 00:05:00.917 "num_blocks": 16384, 00:05:00.917 "uuid": "8ed558bb-f5d9-40d7-8b57-47f04229bc1b", 00:05:00.917 "assigned_rate_limits": { 00:05:00.917 "rw_ios_per_sec": 0, 00:05:00.917 "rw_mbytes_per_sec": 0, 00:05:00.917 "r_mbytes_per_sec": 0, 00:05:00.917 "w_mbytes_per_sec": 0 00:05:00.917 }, 00:05:00.917 "claimed": false, 00:05:00.917 "zoned": false, 00:05:00.917 "supported_io_types": { 00:05:00.917 "read": true, 00:05:00.917 "write": true, 00:05:00.917 "unmap": true, 00:05:00.917 "flush": true, 00:05:00.917 "reset": true, 00:05:00.917 "nvme_admin": false, 00:05:00.917 "nvme_io": false, 00:05:00.917 "nvme_io_md": false, 00:05:00.917 "write_zeroes": true, 00:05:00.917 "zcopy": true, 00:05:00.917 "get_zone_info": false, 00:05:00.917 "zone_management": false, 00:05:00.917 "zone_append": false, 00:05:00.917 "compare": false, 00:05:00.917 "compare_and_write": false, 00:05:00.917 "abort": true, 00:05:00.917 "seek_hole": false, 00:05:00.917 "seek_data": false, 00:05:00.917 "copy": true, 00:05:00.917 "nvme_iov_md": false 00:05:00.917 }, 00:05:00.917 "memory_domains": [ 00:05:00.917 { 00:05:00.917 "dma_device_id": "system", 00:05:00.917 "dma_device_type": 1 00:05:00.917 }, 00:05:00.917 { 00:05:00.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.917 "dma_device_type": 2 00:05:00.917 } 00:05:00.917 ], 00:05:00.917 "driver_specific": {} 00:05:00.917 } 00:05:00.917 ]' 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 [2024-07-26 18:05:26.985074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.917 [2024-07-26 18:05:26.985131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.917 [2024-07-26 18:05:26.985153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9c47f0 00:05:00.917 
[2024-07-26 18:05:26.985167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.917 [2024-07-26 18:05:26.986729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.917 [2024-07-26 18:05:26.986757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.917 Passthru0 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.917 18:05:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.917 18:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.917 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.917 { 00:05:00.917 "name": "Malloc0", 00:05:00.917 "aliases": [ 00:05:00.917 "8ed558bb-f5d9-40d7-8b57-47f04229bc1b" 00:05:00.917 ], 00:05:00.917 "product_name": "Malloc disk", 00:05:00.917 "block_size": 512, 00:05:00.917 "num_blocks": 16384, 00:05:00.917 "uuid": "8ed558bb-f5d9-40d7-8b57-47f04229bc1b", 00:05:00.917 "assigned_rate_limits": { 00:05:00.917 "rw_ios_per_sec": 0, 00:05:00.917 "rw_mbytes_per_sec": 0, 00:05:00.917 "r_mbytes_per_sec": 0, 00:05:00.917 "w_mbytes_per_sec": 0 00:05:00.917 }, 00:05:00.917 "claimed": true, 00:05:00.917 "claim_type": "exclusive_write", 00:05:00.917 "zoned": false, 00:05:00.917 "supported_io_types": { 00:05:00.917 "read": true, 00:05:00.917 "write": true, 00:05:00.917 "unmap": true, 00:05:00.917 "flush": true, 00:05:00.917 "reset": true, 00:05:00.917 "nvme_admin": false, 00:05:00.917 "nvme_io": false, 00:05:00.917 "nvme_io_md": false, 00:05:00.917 "write_zeroes": true, 00:05:00.917 "zcopy": true, 00:05:00.918 "get_zone_info": false, 00:05:00.918 "zone_management": false, 00:05:00.918 "zone_append": false, 00:05:00.918 "compare": false, 00:05:00.918 "compare_and_write": false, 00:05:00.918 "abort": true, 00:05:00.918 "seek_hole": false, 00:05:00.918 "seek_data": false, 00:05:00.918 "copy": true, 00:05:00.918 "nvme_iov_md": false 00:05:00.918 }, 00:05:00.918 "memory_domains": [ 00:05:00.918 { 00:05:00.918 "dma_device_id": "system", 00:05:00.918 "dma_device_type": 1 00:05:00.918 }, 00:05:00.918 { 00:05:00.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.918 "dma_device_type": 2 00:05:00.918 } 00:05:00.918 ], 00:05:00.918 "driver_specific": {} 00:05:00.918 }, 00:05:00.918 { 00:05:00.918 "name": "Passthru0", 00:05:00.918 "aliases": [ 00:05:00.918 "54943f77-6ea8-524a-b66d-68654dd0562c" 00:05:00.918 ], 00:05:00.918 "product_name": "passthru", 00:05:00.918 "block_size": 512, 00:05:00.918 "num_blocks": 16384, 00:05:00.918 "uuid": "54943f77-6ea8-524a-b66d-68654dd0562c", 00:05:00.918 "assigned_rate_limits": { 00:05:00.918 "rw_ios_per_sec": 0, 00:05:00.918 "rw_mbytes_per_sec": 0, 00:05:00.918 "r_mbytes_per_sec": 0, 00:05:00.918 "w_mbytes_per_sec": 0 00:05:00.918 }, 00:05:00.918 "claimed": false, 00:05:00.918 "zoned": false, 00:05:00.918 "supported_io_types": { 00:05:00.918 "read": true, 00:05:00.918 "write": true, 00:05:00.918 "unmap": true, 00:05:00.918 "flush": true, 00:05:00.918 "reset": true, 00:05:00.918 "nvme_admin": false, 00:05:00.918 "nvme_io": false, 00:05:00.918 "nvme_io_md": false, 00:05:00.918 "write_zeroes": true, 00:05:00.918 "zcopy": true, 00:05:00.918 "get_zone_info": false, 00:05:00.918 "zone_management": false, 00:05:00.918 "zone_append": false, 00:05:00.918 
"compare": false, 00:05:00.918 "compare_and_write": false, 00:05:00.918 "abort": true, 00:05:00.918 "seek_hole": false, 00:05:00.918 "seek_data": false, 00:05:00.918 "copy": true, 00:05:00.918 "nvme_iov_md": false 00:05:00.918 }, 00:05:00.918 "memory_domains": [ 00:05:00.918 { 00:05:00.918 "dma_device_id": "system", 00:05:00.918 "dma_device_type": 1 00:05:00.918 }, 00:05:00.918 { 00:05:00.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.918 "dma_device_type": 2 00:05:00.918 } 00:05:00.918 ], 00:05:00.918 "driver_specific": { 00:05:00.918 "passthru": { 00:05:00.918 "name": "Passthru0", 00:05:00.918 "base_bdev_name": "Malloc0" 00:05:00.918 } 00:05:00.918 } 00:05:00.918 } 00:05:00.918 ]' 00:05:00.918 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.918 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.918 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.918 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.918 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.918 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.918 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.918 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.918 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.176 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.176 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.176 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.176 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.176 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.176 18:05:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.176 00:05:01.176 real 0m0.230s 00:05:01.176 user 0m0.154s 00:05:01.176 sys 0m0.021s 00:05:01.176 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.176 18:05:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 ************************************ 00:05:01.176 END TEST rpc_integrity 00:05:01.176 ************************************ 00:05:01.176 18:05:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:01.176 18:05:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.176 18:05:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.176 18:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 ************************************ 00:05:01.176 START TEST rpc_plugins 00:05:01.176 ************************************ 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:01.176 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.176 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:01.176 
18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.176 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:01.176 { 00:05:01.176 "name": "Malloc1", 00:05:01.176 "aliases": [ 00:05:01.176 "3f16aa44-f983-4a4f-b984-a735b7c7fd73" 00:05:01.176 ], 00:05:01.176 "product_name": "Malloc disk", 00:05:01.176 "block_size": 4096, 00:05:01.176 "num_blocks": 256, 00:05:01.176 "uuid": "3f16aa44-f983-4a4f-b984-a735b7c7fd73", 00:05:01.176 "assigned_rate_limits": { 00:05:01.176 "rw_ios_per_sec": 0, 00:05:01.176 "rw_mbytes_per_sec": 0, 00:05:01.176 "r_mbytes_per_sec": 0, 00:05:01.176 "w_mbytes_per_sec": 0 00:05:01.176 }, 00:05:01.176 "claimed": false, 00:05:01.176 "zoned": false, 00:05:01.176 "supported_io_types": { 00:05:01.176 "read": true, 00:05:01.176 "write": true, 00:05:01.176 "unmap": true, 00:05:01.176 "flush": true, 00:05:01.176 "reset": true, 00:05:01.176 "nvme_admin": false, 00:05:01.176 "nvme_io": false, 00:05:01.176 "nvme_io_md": false, 00:05:01.176 "write_zeroes": true, 00:05:01.176 "zcopy": true, 00:05:01.176 "get_zone_info": false, 00:05:01.176 "zone_management": false, 00:05:01.176 "zone_append": false, 00:05:01.176 "compare": false, 00:05:01.176 "compare_and_write": false, 00:05:01.176 "abort": true, 00:05:01.176 "seek_hole": false, 00:05:01.176 "seek_data": false, 00:05:01.176 "copy": true, 00:05:01.176 "nvme_iov_md": false 00:05:01.176 }, 00:05:01.176 "memory_domains": [ 00:05:01.176 { 00:05:01.176 "dma_device_id": "system", 00:05:01.176 "dma_device_type": 1 00:05:01.176 }, 00:05:01.176 { 00:05:01.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.176 "dma_device_type": 2 00:05:01.176 } 00:05:01.176 ], 00:05:01.176 "driver_specific": {} 00:05:01.176 } 00:05:01.176 ]' 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:01.177 18:05:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:01.177 00:05:01.177 real 0m0.114s 00:05:01.177 user 0m0.077s 00:05:01.177 sys 0m0.008s 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.177 18:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.177 ************************************ 00:05:01.177 END TEST rpc_plugins 00:05:01.177 ************************************ 00:05:01.177 18:05:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:05:01.177 18:05:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.177 18:05:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.177 18:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.177 ************************************ 00:05:01.177 START TEST rpc_trace_cmd_test 00:05:01.177 ************************************ 00:05:01.177 18:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:01.177 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:01.177 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:01.177 18:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.177 18:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:01.435 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1332238", 00:05:01.435 "tpoint_group_mask": "0x8", 00:05:01.435 "iscsi_conn": { 00:05:01.435 "mask": "0x2", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "scsi": { 00:05:01.435 "mask": "0x4", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "bdev": { 00:05:01.435 "mask": "0x8", 00:05:01.435 "tpoint_mask": "0xffffffffffffffff" 00:05:01.435 }, 00:05:01.435 "nvmf_rdma": { 00:05:01.435 "mask": "0x10", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "nvmf_tcp": { 00:05:01.435 "mask": "0x20", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "ftl": { 00:05:01.435 "mask": "0x40", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "blobfs": { 00:05:01.435 "mask": "0x80", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "dsa": { 00:05:01.435 "mask": "0x200", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "thread": { 00:05:01.435 "mask": "0x400", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "nvme_pcie": { 00:05:01.435 "mask": "0x800", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "iaa": { 00:05:01.435 "mask": "0x1000", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "nvme_tcp": { 00:05:01.435 "mask": "0x2000", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "bdev_nvme": { 00:05:01.435 "mask": "0x4000", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 }, 00:05:01.435 "sock": { 00:05:01.435 "mask": "0x8000", 00:05:01.435 "tpoint_mask": "0x0" 00:05:01.435 } 00:05:01.435 }' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:01.435 00:05:01.435 real 
0m0.193s 00:05:01.435 user 0m0.167s 00:05:01.435 sys 0m0.019s 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.435 18:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.435 ************************************ 00:05:01.435 END TEST rpc_trace_cmd_test 00:05:01.435 ************************************ 00:05:01.435 18:05:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:01.435 18:05:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:01.435 18:05:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:01.435 18:05:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.435 18:05:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.435 18:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.435 ************************************ 00:05:01.435 START TEST rpc_daemon_integrity 00:05:01.435 ************************************ 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.435 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.693 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.693 { 00:05:01.693 "name": "Malloc2", 00:05:01.693 "aliases": [ 00:05:01.693 "1fa6b996-2fb1-4665-8e63-e97df3da05f1" 00:05:01.693 ], 00:05:01.694 "product_name": "Malloc disk", 00:05:01.694 "block_size": 512, 00:05:01.694 "num_blocks": 16384, 00:05:01.694 "uuid": "1fa6b996-2fb1-4665-8e63-e97df3da05f1", 00:05:01.694 "assigned_rate_limits": { 00:05:01.694 "rw_ios_per_sec": 0, 00:05:01.694 "rw_mbytes_per_sec": 0, 00:05:01.694 "r_mbytes_per_sec": 0, 00:05:01.694 "w_mbytes_per_sec": 0 00:05:01.694 }, 00:05:01.694 "claimed": false, 00:05:01.694 "zoned": false, 00:05:01.694 "supported_io_types": { 00:05:01.694 "read": true, 00:05:01.694 "write": true, 00:05:01.694 "unmap": true, 00:05:01.694 "flush": true, 00:05:01.694 "reset": true, 00:05:01.694 "nvme_admin": false, 00:05:01.694 "nvme_io": false, 00:05:01.694 "nvme_io_md": false, 00:05:01.694 "write_zeroes": true, 00:05:01.694 "zcopy": true, 
00:05:01.694 "get_zone_info": false, 00:05:01.694 "zone_management": false, 00:05:01.694 "zone_append": false, 00:05:01.694 "compare": false, 00:05:01.694 "compare_and_write": false, 00:05:01.694 "abort": true, 00:05:01.694 "seek_hole": false, 00:05:01.694 "seek_data": false, 00:05:01.694 "copy": true, 00:05:01.694 "nvme_iov_md": false 00:05:01.694 }, 00:05:01.694 "memory_domains": [ 00:05:01.694 { 00:05:01.694 "dma_device_id": "system", 00:05:01.694 "dma_device_type": 1 00:05:01.694 }, 00:05:01.694 { 00:05:01.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.694 "dma_device_type": 2 00:05:01.694 } 00:05:01.694 ], 00:05:01.694 "driver_specific": {} 00:05:01.694 } 00:05:01.694 ]' 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 [2024-07-26 18:05:27.663384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.694 [2024-07-26 18:05:27.663427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.694 [2024-07-26 18:05:27.663450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb68490 00:05:01.694 [2024-07-26 18:05:27.663466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.694 [2024-07-26 18:05:27.664848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.694 [2024-07-26 18:05:27.664877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.694 Passthru0 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.694 { 00:05:01.694 "name": "Malloc2", 00:05:01.694 "aliases": [ 00:05:01.694 "1fa6b996-2fb1-4665-8e63-e97df3da05f1" 00:05:01.694 ], 00:05:01.694 "product_name": "Malloc disk", 00:05:01.694 "block_size": 512, 00:05:01.694 "num_blocks": 16384, 00:05:01.694 "uuid": "1fa6b996-2fb1-4665-8e63-e97df3da05f1", 00:05:01.694 "assigned_rate_limits": { 00:05:01.694 "rw_ios_per_sec": 0, 00:05:01.694 "rw_mbytes_per_sec": 0, 00:05:01.694 "r_mbytes_per_sec": 0, 00:05:01.694 "w_mbytes_per_sec": 0 00:05:01.694 }, 00:05:01.694 "claimed": true, 00:05:01.694 "claim_type": "exclusive_write", 00:05:01.694 "zoned": false, 00:05:01.694 "supported_io_types": { 00:05:01.694 "read": true, 00:05:01.694 "write": true, 00:05:01.694 "unmap": true, 00:05:01.694 "flush": true, 00:05:01.694 "reset": true, 00:05:01.694 "nvme_admin": false, 00:05:01.694 "nvme_io": false, 00:05:01.694 "nvme_io_md": false, 00:05:01.694 "write_zeroes": true, 00:05:01.694 "zcopy": true, 00:05:01.694 "get_zone_info": false, 00:05:01.694 "zone_management": false, 00:05:01.694 "zone_append": false, 00:05:01.694 
"compare": false, 00:05:01.694 "compare_and_write": false, 00:05:01.694 "abort": true, 00:05:01.694 "seek_hole": false, 00:05:01.694 "seek_data": false, 00:05:01.694 "copy": true, 00:05:01.694 "nvme_iov_md": false 00:05:01.694 }, 00:05:01.694 "memory_domains": [ 00:05:01.694 { 00:05:01.694 "dma_device_id": "system", 00:05:01.694 "dma_device_type": 1 00:05:01.694 }, 00:05:01.694 { 00:05:01.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.694 "dma_device_type": 2 00:05:01.694 } 00:05:01.694 ], 00:05:01.694 "driver_specific": {} 00:05:01.694 }, 00:05:01.694 { 00:05:01.694 "name": "Passthru0", 00:05:01.694 "aliases": [ 00:05:01.694 "f74e1b50-c299-5772-b794-b8ac0f00244f" 00:05:01.694 ], 00:05:01.694 "product_name": "passthru", 00:05:01.694 "block_size": 512, 00:05:01.694 "num_blocks": 16384, 00:05:01.694 "uuid": "f74e1b50-c299-5772-b794-b8ac0f00244f", 00:05:01.694 "assigned_rate_limits": { 00:05:01.694 "rw_ios_per_sec": 0, 00:05:01.694 "rw_mbytes_per_sec": 0, 00:05:01.694 "r_mbytes_per_sec": 0, 00:05:01.694 "w_mbytes_per_sec": 0 00:05:01.694 }, 00:05:01.694 "claimed": false, 00:05:01.694 "zoned": false, 00:05:01.694 "supported_io_types": { 00:05:01.694 "read": true, 00:05:01.694 "write": true, 00:05:01.694 "unmap": true, 00:05:01.694 "flush": true, 00:05:01.694 "reset": true, 00:05:01.694 "nvme_admin": false, 00:05:01.694 "nvme_io": false, 00:05:01.694 "nvme_io_md": false, 00:05:01.694 "write_zeroes": true, 00:05:01.694 "zcopy": true, 00:05:01.694 "get_zone_info": false, 00:05:01.694 "zone_management": false, 00:05:01.694 "zone_append": false, 00:05:01.694 "compare": false, 00:05:01.694 "compare_and_write": false, 00:05:01.694 "abort": true, 00:05:01.694 "seek_hole": false, 00:05:01.694 "seek_data": false, 00:05:01.694 "copy": true, 00:05:01.694 "nvme_iov_md": false 00:05:01.694 }, 00:05:01.694 "memory_domains": [ 00:05:01.694 { 00:05:01.694 "dma_device_id": "system", 00:05:01.694 "dma_device_type": 1 00:05:01.694 }, 00:05:01.694 { 00:05:01.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.694 "dma_device_type": 2 00:05:01.694 } 00:05:01.694 ], 00:05:01.694 "driver_specific": { 00:05:01.694 "passthru": { 00:05:01.694 "name": "Passthru0", 00:05:01.694 "base_bdev_name": "Malloc2" 00:05:01.694 } 00:05:01.694 } 00:05:01.694 } 00:05:01.694 ]' 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 18:05:27 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.694 00:05:01.694 real 0m0.227s 00:05:01.694 user 0m0.148s 00:05:01.694 sys 0m0.024s 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.694 18:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 ************************************ 00:05:01.694 END TEST rpc_daemon_integrity 00:05:01.694 ************************************ 00:05:01.694 18:05:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.694 18:05:27 rpc -- rpc/rpc.sh@84 -- # killprocess 1332238 00:05:01.694 18:05:27 rpc -- common/autotest_common.sh@950 -- # '[' -z 1332238 ']' 00:05:01.694 18:05:27 rpc -- common/autotest_common.sh@954 -- # kill -0 1332238 00:05:01.694 18:05:27 rpc -- common/autotest_common.sh@955 -- # uname 00:05:01.694 18:05:27 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.694 18:05:27 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1332238 00:05:01.695 18:05:27 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.695 18:05:27 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.695 18:05:27 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1332238' 00:05:01.695 killing process with pid 1332238 00:05:01.695 18:05:27 rpc -- common/autotest_common.sh@969 -- # kill 1332238 00:05:01.695 18:05:27 rpc -- common/autotest_common.sh@974 -- # wait 1332238 00:05:02.260 00:05:02.260 real 0m1.900s 00:05:02.260 user 0m2.412s 00:05:02.260 sys 0m0.584s 00:05:02.260 18:05:28 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.260 18:05:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.260 ************************************ 00:05:02.260 END TEST rpc 00:05:02.260 ************************************ 00:05:02.260 18:05:28 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:02.260 18:05:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.260 18:05:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.260 18:05:28 -- common/autotest_common.sh@10 -- # set +x 00:05:02.260 ************************************ 00:05:02.260 START TEST skip_rpc 00:05:02.260 ************************************ 00:05:02.260 18:05:28 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:02.260 * Looking for test storage... 
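Both integrity tests above (rpc_integrity and rpc_daemon_integrity) run the same RPC round-trip: create a malloc bdev, wrap it with a passthru bdev (note how Malloc2 flips to "claimed": true with "claim_type": "exclusive_write" once Passthru0 exists), confirm via bdev_get_bdevs piped through jq that exactly two bdevs are registered, then tear down in reverse order and verify the list is empty again. Condensed to its essentials, with error handling omitted:

    malloc=$(rpc_cmd bdev_malloc_create 8 512)              # returns the bdev name, e.g. Malloc2
    rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0  # claims the base bdev exclusively
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" = 2 ]
    rpc_cmd bdev_passthru_delete Passthru0
    rpc_cmd bdev_malloc_delete "$malloc"
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" = 0 ]
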
00:05:02.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.260 18:05:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.260 18:05:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.260 18:05:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.260 18:05:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.260 18:05:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.260 18:05:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.260 ************************************ 00:05:02.260 START TEST skip_rpc 00:05:02.260 ************************************ 00:05:02.260 18:05:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:02.260 18:05:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1332672 00:05:02.260 18:05:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.260 18:05:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.260 18:05:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.518 [2024-07-26 18:05:28.420273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:02.518 [2024-07-26 18:05:28.420369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332672 ] 00:05:02.518 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.518 [2024-07-26 18:05:28.451907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
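test_skip_rpc starts the target with --no-rpc-server, so no JSON-RPC listener is ever created and the spdk_get_version probe below has to fail for the test to pass. The shape of the check, with the binary path shortened and the killprocess teardown reduced to a plain kill/wait:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                          # give the reactor time to start
    NOT rpc_cmd spdk_get_version     # must fail: no RPC socket exists
    kill "$spdk_pid" && wait "$spdk_pid"
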
00:05:02.518 [2024-07-26 18:05:28.484122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.518 [2024-07-26 18:05:28.577745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1332672 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1332672 ']' 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1332672 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1332672 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1332672' 00:05:07.783 killing process with pid 1332672 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1332672 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1332672 00:05:07.783 00:05:07.783 real 0m5.457s 00:05:07.783 user 0m5.132s 00:05:07.783 sys 0m0.329s 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.783 18:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.783 ************************************ 00:05:07.783 END TEST skip_rpc 00:05:07.783 ************************************ 00:05:07.783 18:05:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.783 18:05:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.783 18:05:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
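The es bookkeeping above is the NOT helper at work: it runs the wrapped command, records the exit status, and succeeds only when that status is non-zero. A condensed sketch only; the real helper in autotest_common.sh additionally validates its argument and masks signal exit codes, which is what the (( es > 128 )) check above is for:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # invert: pass only if the wrapped command failed
    }
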
00:05:07.783 18:05:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.783 ************************************ 00:05:07.783 START TEST skip_rpc_with_json 00:05:07.783 ************************************ 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1333360 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1333360 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1333360 ']' 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.783 18:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.783 [2024-07-26 18:05:33.924164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:07.783 [2024-07-26 18:05:33.924241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333360 ] 00:05:08.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.042 [2024-07-26 18:05:33.954800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
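Once this target is up, skip_rpc_with_json drives it in three RPC steps, all visible below: prove the TCP transport does not exist yet (nvmf_get_transports returns JSON-RPC error -19), create it, and snapshot the whole runtime configuration with save_config. The same sequence by hand; the redirect into config.json is an assumption, since the log only shows the later cat of that file:

    rpc_cmd nvmf_get_transports --trtype tcp    # fails: transport 'tcp' does not exist
    rpc_cmd nvmf_create_transport -t tcp        # logs "*** TCP Transport Init ***"
    rpc_cmd save_config > test/rpc/config.json  # dump every subsystem's config as JSON
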
00:05:08.042 [2024-07-26 18:05:33.986313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.042 [2024-07-26 18:05:34.073833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.301 [2024-07-26 18:05:34.329720] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.301 request: 00:05:08.301 { 00:05:08.301 "trtype": "tcp", 00:05:08.301 "method": "nvmf_get_transports", 00:05:08.301 "req_id": 1 00:05:08.301 } 00:05:08.301 Got JSON-RPC error response 00:05:08.301 response: 00:05:08.301 { 00:05:08.301 "code": -19, 00:05:08.301 "message": "No such device" 00:05:08.301 } 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.301 [2024-07-26 18:05:34.337850] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.301 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.560 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.560 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.560 { 00:05:08.560 "subsystems": [ 00:05:08.560 { 00:05:08.560 "subsystem": "vfio_user_target", 00:05:08.560 "config": null 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "keyring", 00:05:08.560 "config": [] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "iobuf", 00:05:08.560 "config": [ 00:05:08.560 { 00:05:08.560 "method": "iobuf_set_options", 00:05:08.560 "params": { 00:05:08.560 "small_pool_count": 8192, 00:05:08.560 "large_pool_count": 1024, 00:05:08.560 "small_bufsize": 8192, 00:05:08.560 "large_bufsize": 135168 00:05:08.560 } 00:05:08.560 } 00:05:08.560 ] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "sock", 00:05:08.560 "config": [ 00:05:08.560 { 00:05:08.560 "method": "sock_set_default_impl", 00:05:08.560 "params": { 00:05:08.560 "impl_name": "posix" 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": "sock_impl_set_options", 00:05:08.560 "params": { 00:05:08.560 "impl_name": "ssl", 00:05:08.560 "recv_buf_size": 4096, 00:05:08.560 "send_buf_size": 4096, 00:05:08.560 "enable_recv_pipe": true, 00:05:08.560 "enable_quickack": false, 00:05:08.560 "enable_placement_id": 0, 00:05:08.560 "enable_zerocopy_send_server": true, 00:05:08.560 
"enable_zerocopy_send_client": false, 00:05:08.560 "zerocopy_threshold": 0, 00:05:08.560 "tls_version": 0, 00:05:08.560 "enable_ktls": false 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": "sock_impl_set_options", 00:05:08.560 "params": { 00:05:08.560 "impl_name": "posix", 00:05:08.560 "recv_buf_size": 2097152, 00:05:08.560 "send_buf_size": 2097152, 00:05:08.560 "enable_recv_pipe": true, 00:05:08.560 "enable_quickack": false, 00:05:08.560 "enable_placement_id": 0, 00:05:08.560 "enable_zerocopy_send_server": true, 00:05:08.560 "enable_zerocopy_send_client": false, 00:05:08.560 "zerocopy_threshold": 0, 00:05:08.560 "tls_version": 0, 00:05:08.560 "enable_ktls": false 00:05:08.560 } 00:05:08.560 } 00:05:08.560 ] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "vmd", 00:05:08.560 "config": [] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "accel", 00:05:08.560 "config": [ 00:05:08.560 { 00:05:08.560 "method": "accel_set_options", 00:05:08.560 "params": { 00:05:08.560 "small_cache_size": 128, 00:05:08.560 "large_cache_size": 16, 00:05:08.560 "task_count": 2048, 00:05:08.560 "sequence_count": 2048, 00:05:08.560 "buf_count": 2048 00:05:08.560 } 00:05:08.560 } 00:05:08.560 ] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "bdev", 00:05:08.560 "config": [ 00:05:08.560 { 00:05:08.560 "method": "bdev_set_options", 00:05:08.560 "params": { 00:05:08.560 "bdev_io_pool_size": 65535, 00:05:08.560 "bdev_io_cache_size": 256, 00:05:08.560 "bdev_auto_examine": true, 00:05:08.560 "iobuf_small_cache_size": 128, 00:05:08.560 "iobuf_large_cache_size": 16 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": "bdev_raid_set_options", 00:05:08.560 "params": { 00:05:08.560 "process_window_size_kb": 1024, 00:05:08.560 "process_max_bandwidth_mb_sec": 0 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": "bdev_iscsi_set_options", 00:05:08.560 "params": { 00:05:08.560 "timeout_sec": 30 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": "bdev_nvme_set_options", 00:05:08.560 "params": { 00:05:08.560 "action_on_timeout": "none", 00:05:08.560 "timeout_us": 0, 00:05:08.560 "timeout_admin_us": 0, 00:05:08.560 "keep_alive_timeout_ms": 10000, 00:05:08.560 "arbitration_burst": 0, 00:05:08.560 "low_priority_weight": 0, 00:05:08.560 "medium_priority_weight": 0, 00:05:08.560 "high_priority_weight": 0, 00:05:08.560 "nvme_adminq_poll_period_us": 10000, 00:05:08.560 "nvme_ioq_poll_period_us": 0, 00:05:08.560 "io_queue_requests": 0, 00:05:08.560 "delay_cmd_submit": true, 00:05:08.560 "transport_retry_count": 4, 00:05:08.560 "bdev_retry_count": 3, 00:05:08.560 "transport_ack_timeout": 0, 00:05:08.560 "ctrlr_loss_timeout_sec": 0, 00:05:08.560 "reconnect_delay_sec": 0, 00:05:08.560 "fast_io_fail_timeout_sec": 0, 00:05:08.560 "disable_auto_failback": false, 00:05:08.560 "generate_uuids": false, 00:05:08.560 "transport_tos": 0, 00:05:08.560 "nvme_error_stat": false, 00:05:08.560 "rdma_srq_size": 0, 00:05:08.560 "io_path_stat": false, 00:05:08.560 "allow_accel_sequence": false, 00:05:08.560 "rdma_max_cq_size": 0, 00:05:08.560 "rdma_cm_event_timeout_ms": 0, 00:05:08.560 "dhchap_digests": [ 00:05:08.560 "sha256", 00:05:08.560 "sha384", 00:05:08.560 "sha512" 00:05:08.560 ], 00:05:08.560 "dhchap_dhgroups": [ 00:05:08.560 "null", 00:05:08.560 "ffdhe2048", 00:05:08.560 "ffdhe3072", 00:05:08.560 "ffdhe4096", 00:05:08.560 "ffdhe6144", 00:05:08.560 "ffdhe8192" 00:05:08.560 ] 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": 
"bdev_nvme_set_hotplug", 00:05:08.560 "params": { 00:05:08.560 "period_us": 100000, 00:05:08.560 "enable": false 00:05:08.560 } 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "method": "bdev_wait_for_examine" 00:05:08.560 } 00:05:08.560 ] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "scsi", 00:05:08.560 "config": null 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "scheduler", 00:05:08.560 "config": [ 00:05:08.560 { 00:05:08.560 "method": "framework_set_scheduler", 00:05:08.560 "params": { 00:05:08.560 "name": "static" 00:05:08.560 } 00:05:08.560 } 00:05:08.560 ] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "vhost_scsi", 00:05:08.560 "config": [] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "vhost_blk", 00:05:08.560 "config": [] 00:05:08.560 }, 00:05:08.560 { 00:05:08.560 "subsystem": "ublk", 00:05:08.560 "config": [] 00:05:08.561 }, 00:05:08.561 { 00:05:08.561 "subsystem": "nbd", 00:05:08.561 "config": [] 00:05:08.561 }, 00:05:08.561 { 00:05:08.561 "subsystem": "nvmf", 00:05:08.561 "config": [ 00:05:08.561 { 00:05:08.561 "method": "nvmf_set_config", 00:05:08.561 "params": { 00:05:08.561 "discovery_filter": "match_any", 00:05:08.561 "admin_cmd_passthru": { 00:05:08.561 "identify_ctrlr": false 00:05:08.561 } 00:05:08.561 } 00:05:08.561 }, 00:05:08.561 { 00:05:08.561 "method": "nvmf_set_max_subsystems", 00:05:08.561 "params": { 00:05:08.561 "max_subsystems": 1024 00:05:08.561 } 00:05:08.561 }, 00:05:08.561 { 00:05:08.561 "method": "nvmf_set_crdt", 00:05:08.561 "params": { 00:05:08.561 "crdt1": 0, 00:05:08.561 "crdt2": 0, 00:05:08.561 "crdt3": 0 00:05:08.561 } 00:05:08.561 }, 00:05:08.561 { 00:05:08.561 "method": "nvmf_create_transport", 00:05:08.561 "params": { 00:05:08.561 "trtype": "TCP", 00:05:08.561 "max_queue_depth": 128, 00:05:08.561 "max_io_qpairs_per_ctrlr": 127, 00:05:08.561 "in_capsule_data_size": 4096, 00:05:08.561 "max_io_size": 131072, 00:05:08.561 "io_unit_size": 131072, 00:05:08.561 "max_aq_depth": 128, 00:05:08.561 "num_shared_buffers": 511, 00:05:08.561 "buf_cache_size": 4294967295, 00:05:08.561 "dif_insert_or_strip": false, 00:05:08.561 "zcopy": false, 00:05:08.561 "c2h_success": true, 00:05:08.561 "sock_priority": 0, 00:05:08.561 "abort_timeout_sec": 1, 00:05:08.561 "ack_timeout": 0, 00:05:08.561 "data_wr_pool_size": 0 00:05:08.561 } 00:05:08.561 } 00:05:08.561 ] 00:05:08.561 }, 00:05:08.561 { 00:05:08.561 "subsystem": "iscsi", 00:05:08.561 "config": [ 00:05:08.561 { 00:05:08.561 "method": "iscsi_set_options", 00:05:08.561 "params": { 00:05:08.561 "node_base": "iqn.2016-06.io.spdk", 00:05:08.561 "max_sessions": 128, 00:05:08.561 "max_connections_per_session": 2, 00:05:08.561 "max_queue_depth": 64, 00:05:08.561 "default_time2wait": 2, 00:05:08.561 "default_time2retain": 20, 00:05:08.561 "first_burst_length": 8192, 00:05:08.561 "immediate_data": true, 00:05:08.561 "allow_duplicated_isid": false, 00:05:08.561 "error_recovery_level": 0, 00:05:08.561 "nop_timeout": 60, 00:05:08.561 "nop_in_interval": 30, 00:05:08.561 "disable_chap": false, 00:05:08.561 "require_chap": false, 00:05:08.561 "mutual_chap": false, 00:05:08.561 "chap_group": 0, 00:05:08.561 "max_large_datain_per_connection": 64, 00:05:08.561 "max_r2t_per_connection": 4, 00:05:08.561 "pdu_pool_size": 36864, 00:05:08.561 "immediate_data_pool_size": 16384, 00:05:08.561 "data_out_pool_size": 2048 00:05:08.561 } 00:05:08.561 } 00:05:08.561 ] 00:05:08.561 } 00:05:08.561 ] 00:05:08.561 } 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1333360 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1333360 ']' 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1333360 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333360 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333360' 00:05:08.561 killing process with pid 1333360 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1333360 00:05:08.561 18:05:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1333360 00:05:08.819 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1333500 00:05:08.819 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.819 18:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:14.081 18:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1333500 00:05:14.081 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1333500 ']' 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1333500 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333500 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333500' 00:05:14.082 killing process with pid 1333500 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1333500 00:05:14.082 18:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1333500 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.340 00:05:14.340 real 0m6.508s 00:05:14.340 user 0m6.116s 00:05:14.340 sys 0m0.666s 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.340 
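The saved config.json was then replayed into a fresh target started with --json and no RPC server at all, and grep -q 'TCP Transport Init' on the captured log proves the transport came back purely from the file. In outline; the redirection of the target's output into log.txt is assumed, since the log shows only the LOG_PATH definition and the grep:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt   # transport restored from JSON alone
    rm test/rpc/log.txt
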
************************************ 00:05:14.340 END TEST skip_rpc_with_json 00:05:14.340 ************************************ 00:05:14.340 18:05:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.340 18:05:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.340 18:05:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.340 18:05:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.340 ************************************ 00:05:14.340 START TEST skip_rpc_with_delay 00:05:14.340 ************************************ 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.340 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.340 [2024-07-26 18:05:40.481729] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
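skip_rpc_with_delay asserts an option conflict rather than a feature: --wait-for-rpc tells the app to pause initialization until an explicit RPC arrives, which is meaningless when --no-rpc-server suppresses the RPC server, so startup must abort with the app.c error above. Reduced to one line, binary path shortened:

    NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
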
00:05:14.340 [2024-07-26 18:05:40.481846] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:14.602 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:14.602 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.602 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:14.602 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.602 00:05:14.602 real 0m0.071s 00:05:14.602 user 0m0.047s 00:05:14.602 sys 0m0.024s 00:05:14.602 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.602 18:05:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.602 ************************************ 00:05:14.602 END TEST skip_rpc_with_delay 00:05:14.602 ************************************ 00:05:14.602 18:05:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.602 18:05:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.602 18:05:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.602 18:05:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.602 18:05:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.602 18:05:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.602 ************************************ 00:05:14.602 START TEST exit_on_failed_rpc_init 00:05:14.602 ************************************ 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1334218 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1334218 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1334218 ']' 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.602 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.602 [2024-07-26 18:05:40.595909] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
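exit_on_failed_rpc_init checks that a second target cannot initialize its RPC server while the first still owns the default Unix socket: the -m 0x2 instance launched below must die with "RPC Unix domain socket path /var/tmp/spdk.sock in use" and a non-zero exit, which the NOT wrapper converts into a pass. Skeleton of the scenario, paths shortened:

    build/bin/spdk_tgt -m 0x1 &        # first instance claims /var/tmp/spdk.sock
    waitforlisten $!
    NOT build/bin/spdk_tgt -m 0x2      # second instance must fail: socket in use
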
00:05:14.602 [2024-07-26 18:05:40.596013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334218 ] 00:05:14.602 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.602 [2024-07-26 18:05:40.627914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:14.602 [2024-07-26 18:05:40.653704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.602 [2024-07-26 18:05:40.743230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.884 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.885 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.885 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.885 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.885 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.885 18:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.885 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.885 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.885 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.145 [2024-07-26 18:05:41.053558] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:15.145 [2024-07-26 18:05:41.053652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334228 ] 00:05:15.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.145 [2024-07-26 18:05:41.087898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:15.145 [2024-07-26 18:05:41.119074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.145 [2024-07-26 18:05:41.212436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.145 [2024-07-26 18:05:41.212568] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:15.145 [2024-07-26 18:05:41.212599] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:15.145 [2024-07-26 18:05:41.212621] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1334218 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1334218 ']' 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1334218 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334218 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334218' 00:05:15.403 killing process with pid 1334218 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1334218 00:05:15.403 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1334218 00:05:15.661 00:05:15.661 real 0m1.200s 00:05:15.661 user 0m1.307s 00:05:15.661 sys 0m0.458s 00:05:15.661 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.661 18:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.661 ************************************ 00:05:15.661 END TEST exit_on_failed_rpc_init 00:05:15.661 ************************************ 00:05:15.661 18:05:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.661 00:05:15.661 real 0m13.483s 00:05:15.661 user 0m12.706s 00:05:15.661 sys 0m1.634s 00:05:15.661 18:05:41 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.661 18:05:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.661 
************************************ 00:05:15.661 END TEST skip_rpc 00:05:15.661 ************************************ 00:05:15.661 18:05:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.661 18:05:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.661 18:05:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.661 18:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.920 ************************************ 00:05:15.920 START TEST rpc_client 00:05:15.920 ************************************ 00:05:15.920 18:05:41 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.920 * Looking for test storage... 00:05:15.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:15.920 18:05:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.920 OK 00:05:15.920 18:05:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.920 00:05:15.920 real 0m0.068s 00:05:15.920 user 0m0.022s 00:05:15.920 sys 0m0.051s 00:05:15.920 18:05:41 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.920 18:05:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.920 ************************************ 00:05:15.920 END TEST rpc_client 00:05:15.920 ************************************ 00:05:15.920 18:05:41 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.920 18:05:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.920 18:05:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.920 18:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.920 ************************************ 00:05:15.920 START TEST json_config 00:05:15.920 ************************************ 00:05:15.920 18:05:41 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.920 18:05:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:15.920 18:05:41 
json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.920 18:05:41 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.920 18:05:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.920 18:05:41 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.920 18:05:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.920 18:05:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.921 18:05:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.921 18:05:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.921 18:05:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:15.921 18:05:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@47 -- # : 0 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:15.921 18:05:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:15.921 INFO: JSON configuration test init 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.921 18:05:41 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.921 18:05:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.921 18:05:41 json_config -- json_config/common.sh@10 -- # shift 00:05:15.921 18:05:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.921 18:05:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.921 18:05:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.921 18:05:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.921 18:05:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.921 18:05:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1334467 00:05:15.921 18:05:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r 
/var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.921 18:05:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.921 Waiting for target to run... 00:05:15.921 18:05:41 json_config -- json_config/common.sh@25 -- # waitforlisten 1334467 /var/tmp/spdk_tgt.sock 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@831 -- # '[' -z 1334467 ']' 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.921 18:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.921 [2024-07-26 18:05:42.034168] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:15.921 [2024-07-26 18:05:42.034283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334467 ] 00:05:15.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.487 [2024-07-26 18:05:42.354767] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:16.487 [2024-07-26 18:05:42.388113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.487 [2024-07-26 18:05:42.451716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.052 18:05:42 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.052 18:05:42 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:17.052 18:05:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.052 00:05:17.052 18:05:42 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:17.052 18:05:42 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:17.052 18:05:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.052 18:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.052 18:05:42 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:17.052 18:05:42 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:17.053 18:05:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.053 18:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.053 18:05:43 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:17.053 18:05:43 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:17.053 18:05:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:20.333 
18:05:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.333 18:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:20.333 18:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@51 -- # sort 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:20.333 18:05:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.333 18:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:20.333 18:05:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.333 18:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:20.333 18:05:46 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.333 18:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.591 MallocForNvmf0 00:05:20.591 18:05:46 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.591 18:05:46 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.849 MallocForNvmf1 00:05:20.849 18:05:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:20.849 18:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.106 [2024-07-26 18:05:47.157435] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.106 18:05:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.106 18:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.363 18:05:47 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.363 18:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.621 18:05:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.621 18:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.879 18:05:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.879 18:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:22.136 [2024-07-26 18:05:48.128663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.136 18:05:48 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:22.136 18:05:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.136 18:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.136 18:05:48 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:22.136 18:05:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.136 18:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.136 18:05:48 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:22.136 18:05:48 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.136 18:05:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.393 MallocBdevForConfigChangeCheck 00:05:22.393 18:05:48 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:22.393 18:05:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.393 
18:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.393 18:05:48 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:22.393 18:05:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.958 18:05:48 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:22.958 INFO: shutting down applications... 00:05:22.958 18:05:48 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:22.958 18:05:48 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:22.958 18:05:48 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:22.958 18:05:48 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:24.329 Calling clear_iscsi_subsystem 00:05:24.329 Calling clear_nvmf_subsystem 00:05:24.329 Calling clear_nbd_subsystem 00:05:24.329 Calling clear_ublk_subsystem 00:05:24.329 Calling clear_vhost_blk_subsystem 00:05:24.329 Calling clear_vhost_scsi_subsystem 00:05:24.329 Calling clear_bdev_subsystem 00:05:24.329 18:05:50 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:24.329 18:05:50 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:24.329 18:05:50 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:24.329 18:05:50 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.329 18:05:50 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:24.329 18:05:50 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:24.895 18:05:50 json_config -- json_config/json_config.sh@349 -- # break 00:05:24.895 18:05:50 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:24.895 18:05:50 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:24.895 18:05:50 json_config -- json_config/common.sh@31 -- # local app=target 00:05:24.895 18:05:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.895 18:05:50 json_config -- json_config/common.sh@35 -- # [[ -n 1334467 ]] 00:05:24.895 18:05:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1334467 00:05:24.895 18:05:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.895 18:05:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.895 18:05:50 json_config -- json_config/common.sh@41 -- # kill -0 1334467 00:05:24.895 18:05:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.462 18:05:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.462 18:05:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.462 18:05:51 json_config -- json_config/common.sh@41 -- # kill -0 1334467 00:05:25.462 18:05:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:25.462 18:05:51 json_config -- json_config/common.sh@43 -- # break 00:05:25.462 18:05:51 json_config -- json_config/common.sh@48 -- # 
[[ -n '' ]] 00:05:25.462 18:05:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:25.462 SPDK target shutdown done 00:05:25.462 18:05:51 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:25.462 INFO: relaunching applications... 00:05:25.462 18:05:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.462 18:05:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.462 18:05:51 json_config -- json_config/common.sh@10 -- # shift 00:05:25.462 18:05:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.462 18:05:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.462 18:05:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.462 18:05:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.462 18:05:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.462 18:05:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1335781 00:05:25.462 18:05:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.462 18:05:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.462 Waiting for target to run... 00:05:25.462 18:05:51 json_config -- json_config/common.sh@25 -- # waitforlisten 1335781 /var/tmp/spdk_tgt.sock 00:05:25.462 18:05:51 json_config -- common/autotest_common.sh@831 -- # '[' -z 1335781 ']' 00:05:25.462 18:05:51 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.462 18:05:51 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.462 18:05:51 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.462 18:05:51 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.462 18:05:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.462 [2024-07-26 18:05:51.386099] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:25.462 [2024-07-26 18:05:51.386183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335781 ] 00:05:25.462 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.030 [2024-07-26 18:05:51.878118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
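For reference, the whole configuration phase above (check the notification types, then build the NVMe-oF/TCP target) reduces to a handful of rpc.py calls against the target's socket. These are exactly the commands from the log, with rpc.py shortened from its full /var/jenkins/... path:

    # Two malloc bdevs to serve as namespaces (size in MB and block size as logged)
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, subsystem, namespaces, and a listener on 127.0.0.1:4420
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420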
00:05:26.030 [2024-07-26 18:05:51.911882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.030 [2024-07-26 18:05:51.993939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.311 [2024-07-26 18:05:55.026882] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.311 [2024-07-26 18:05:55.059388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.877 18:05:55 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.877 18:05:55 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:29.877 18:05:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.877 00:05:29.877 18:05:55 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:29.877 18:05:55 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:29.877 INFO: Checking if target configuration is the same... 00:05:29.877 18:05:55 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.877 18:05:55 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:29.877 18:05:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.877 + '[' 2 -ne 2 ']' 00:05:29.877 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.877 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:29.877 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:29.877 +++ basename /dev/fd/62 00:05:29.877 ++ mktemp /tmp/62.XXX 00:05:29.877 + tmp_file_1=/tmp/62.cCN 00:05:29.877 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.877 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.877 + tmp_file_2=/tmp/spdk_tgt_config.json.96U 00:05:29.877 + ret=0 00:05:29.877 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.135 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.135 + diff -u /tmp/62.cCN /tmp/spdk_tgt_config.json.96U 00:05:30.135 + echo 'INFO: JSON config files are the same' 00:05:30.135 INFO: JSON config files are the same 00:05:30.135 + rm /tmp/62.cCN /tmp/spdk_tgt_config.json.96U 00:05:30.135 + exit 0 00:05:30.135 18:05:56 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:30.135 18:05:56 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:30.135 INFO: changing configuration and checking if this can be detected... 
00:05:30.135 18:05:56 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:30.135 18:05:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:30.393 18:05:56 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.393 18:05:56 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:30.393 18:05:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.393 + '[' 2 -ne 2 ']' 00:05:30.393 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:30.393 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:30.393 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:30.393 +++ basename /dev/fd/62 00:05:30.393 ++ mktemp /tmp/62.XXX 00:05:30.393 + tmp_file_1=/tmp/62.ptp 00:05:30.393 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.393 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:30.393 + tmp_file_2=/tmp/spdk_tgt_config.json.G6Z 00:05:30.393 + ret=0 00:05:30.393 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.959 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.959 + diff -u /tmp/62.ptp /tmp/spdk_tgt_config.json.G6Z 00:05:30.959 + ret=1 00:05:30.959 + echo '=== Start of file: /tmp/62.ptp ===' 00:05:30.959 + cat /tmp/62.ptp 00:05:30.959 + echo '=== End of file: /tmp/62.ptp ===' 00:05:30.959 + echo '' 00:05:30.959 + echo '=== Start of file: /tmp/spdk_tgt_config.json.G6Z ===' 00:05:30.959 + cat /tmp/spdk_tgt_config.json.G6Z 00:05:30.959 + echo '=== End of file: /tmp/spdk_tgt_config.json.G6Z ===' 00:05:30.959 + echo '' 00:05:30.959 + rm /tmp/62.ptp /tmp/spdk_tgt_config.json.G6Z 00:05:30.959 + exit 1 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:30.959 INFO: configuration change detected. 
00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@321 -- # [[ -n 1335781 ]] 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.959 18:05:56 json_config -- json_config/json_config.sh@327 -- # killprocess 1335781 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@950 -- # '[' -z 1335781 ']' 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@954 -- # kill -0 1335781 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@955 -- # uname 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1335781 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1335781' 00:05:30.959 killing process with pid 1335781 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@969 -- # kill 1335781 00:05:30.959 18:05:56 json_config -- common/autotest_common.sh@974 -- # wait 1335781 00:05:32.861 18:05:58 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.861 18:05:58 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:32.861 18:05:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.861 18:05:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.861 18:05:58 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:32.861 18:05:58 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:32.861 INFO: Success 00:05:32.861 00:05:32.861 real 0m16.685s 
00:05:32.861 user 0m18.604s 00:05:32.861 sys 0m2.076s 00:05:32.861 18:05:58 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.861 18:05:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.861 ************************************ 00:05:32.861 END TEST json_config 00:05:32.861 ************************************ 00:05:32.861 18:05:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:32.861 18:05:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.861 18:05:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.861 18:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.861 ************************************ 00:05:32.861 START TEST json_config_extra_key 00:05:32.861 ************************************ 00:05:32.861 18:05:58 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:32.861 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.862 18:05:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.862 18:05:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.862 18:05:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.862 18:05:58 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.862 18:05:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.862 18:05:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.862 18:05:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:32.862 18:05:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.862 18:05:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:32.862 18:05:58 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:32.862 INFO: launching applications... 00:05:32.862 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1336711 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.862 Waiting for target to run... 00:05:32.862 18:05:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1336711 /var/tmp/spdk_tgt.sock 00:05:32.862 18:05:58 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1336711 ']' 00:05:32.862 18:05:58 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.862 18:05:58 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.862 18:05:58 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.862 18:05:58 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.862 18:05:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.862 [2024-07-26 18:05:58.770233] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
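The waitforlisten step seen here blocks until the freshly launched target answers on its RPC socket. A minimal sketch of that polling pattern (the real helper lives in common/autotest_common.sh, per the @835/@836 frames above, and carries extra bookkeeping; the probe call and poll interval below are illustrative):

    wait_for_rpc() {
        local sock=$1 max_retries=${2:-100} i    # 100 matches the helper's default max_retries
        for ((i = 0; i < max_retries; i++)); do
            # rpc.py fails until the app is up and listening on the socket
            rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk_tgt.sock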
00:05:32.862 [2024-07-26 18:05:58.770339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336711 ] 00:05:32.862 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.153 [2024-07-26 18:05:59.101482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.153 [2024-07-26 18:05:59.135748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.153 [2024-07-26 18:05:59.199157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.719 18:05:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.719 18:05:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:33.719 00:05:33.719 18:05:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:33.719 INFO: shutting down applications... 00:05:33.719 18:05:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1336711 ]] 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1336711 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1336711 00:05:33.719 18:05:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1336711 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.286 18:06:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.286 SPDK target shutdown done 00:05:34.286 18:06:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.286 Success 00:05:34.286 00:05:34.286 real 0m1.536s 00:05:34.286 user 0m1.468s 00:05:34.286 sys 0m0.433s 00:05:34.286 18:06:00 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.286 18:06:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.286 ************************************ 00:05:34.286 END TEST json_config_extra_key 00:05:34.286 ************************************ 00:05:34.286 18:06:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.286 18:06:00 -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:34.286 18:06:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.286 18:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.286 ************************************ 00:05:34.286 START TEST alias_rpc 00:05:34.286 ************************************ 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.286 * Looking for test storage... 00:05:34.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:34.286 18:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.286 18:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1337041 00:05:34.286 18:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.286 18:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1337041 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1337041 ']' 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.286 18:06:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.286 [2024-07-26 18:06:00.349446] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:34.286 [2024-07-26 18:06:00.349526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337041 ] 00:05:34.286 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.286 [2024-07-26 18:06:00.383947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
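Note that this target is launched without -r, so it listens on the default RPC socket /var/tmp/spdk.sock (visible in the rpc_addr above), which is why the rpc.py calls that follow need no -s flag. The core of the test is the load_config call coming next; assuming -i is load_config's short form of --include-aliases in rpc.py's argument parser (worth confirming against your tree), it is roughly:

    rpc.py load_config --include-aliases    # same as the logged 'rpc.py load_config -i'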
00:05:34.286 [2024-07-26 18:06:00.412152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.544 [2024-07-26 18:06:00.501998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.803 18:06:00 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.803 18:06:00 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.803 18:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:35.061 18:06:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1337041 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1337041 ']' 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1337041 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337041 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337041' 00:05:35.061 killing process with pid 1337041 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@969 -- # kill 1337041 00:05:35.061 18:06:01 alias_rpc -- common/autotest_common.sh@974 -- # wait 1337041 00:05:35.319 00:05:35.319 real 0m1.196s 00:05:35.319 user 0m1.303s 00:05:35.319 sys 0m0.406s 00:05:35.319 18:06:01 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.319 18:06:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.319 ************************************ 00:05:35.319 END TEST alias_rpc 00:05:35.319 ************************************ 00:05:35.319 18:06:01 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:35.319 18:06:01 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:35.319 18:06:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.319 18:06:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.319 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.578 ************************************ 00:05:35.578 START TEST spdkcli_tcp 00:05:35.578 ************************************ 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:35.578 * Looking for test storage... 
00:05:35.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1337290 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:35.578 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1337290 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1337290 ']' 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.578 18:06:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.578 [2024-07-26 18:06:01.590840] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:35.578 [2024-07-26 18:06:01.590925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337290 ] 00:05:35.578 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.578 [2024-07-26 18:06:01.624026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
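Two details worth calling out for this test: the target runs on a two-core mask (-m 0x3 selects cores 0 and 1, and -p 0 pins the main reactor to core 0, matching the two "Reactor started" notices that follow), and the TCP leg is supplied by socat bridging the UNIX RPC socket to port 9998, as the next lines show. In isolation the bridge is just:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # drive the target over TCP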
00:05:35.578 [2024-07-26 18:06:01.651743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.837 [2024-07-26 18:06:01.747081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.837 [2024-07-26 18:06:01.747086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.095 18:06:01 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.095 18:06:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:36.095 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1337326 00:05:36.095 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:36.095 18:06:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:36.095 [ 00:05:36.095 "bdev_malloc_delete", 00:05:36.095 "bdev_malloc_create", 00:05:36.095 "bdev_null_resize", 00:05:36.095 "bdev_null_delete", 00:05:36.095 "bdev_null_create", 00:05:36.095 "bdev_nvme_cuse_unregister", 00:05:36.095 "bdev_nvme_cuse_register", 00:05:36.095 "bdev_opal_new_user", 00:05:36.095 "bdev_opal_set_lock_state", 00:05:36.095 "bdev_opal_delete", 00:05:36.095 "bdev_opal_get_info", 00:05:36.095 "bdev_opal_create", 00:05:36.095 "bdev_nvme_opal_revert", 00:05:36.095 "bdev_nvme_opal_init", 00:05:36.095 "bdev_nvme_send_cmd", 00:05:36.095 "bdev_nvme_get_path_iostat", 00:05:36.095 "bdev_nvme_get_mdns_discovery_info", 00:05:36.095 "bdev_nvme_stop_mdns_discovery", 00:05:36.095 "bdev_nvme_start_mdns_discovery", 00:05:36.095 "bdev_nvme_set_multipath_policy", 00:05:36.095 "bdev_nvme_set_preferred_path", 00:05:36.095 "bdev_nvme_get_io_paths", 00:05:36.095 "bdev_nvme_remove_error_injection", 00:05:36.095 "bdev_nvme_add_error_injection", 00:05:36.095 "bdev_nvme_get_discovery_info", 00:05:36.095 "bdev_nvme_stop_discovery", 00:05:36.095 "bdev_nvme_start_discovery", 00:05:36.095 "bdev_nvme_get_controller_health_info", 00:05:36.095 "bdev_nvme_disable_controller", 00:05:36.095 "bdev_nvme_enable_controller", 00:05:36.095 "bdev_nvme_reset_controller", 00:05:36.095 "bdev_nvme_get_transport_statistics", 00:05:36.095 "bdev_nvme_apply_firmware", 00:05:36.095 "bdev_nvme_detach_controller", 00:05:36.095 "bdev_nvme_get_controllers", 00:05:36.095 "bdev_nvme_attach_controller", 00:05:36.095 "bdev_nvme_set_hotplug", 00:05:36.095 "bdev_nvme_set_options", 00:05:36.095 "bdev_passthru_delete", 00:05:36.095 "bdev_passthru_create", 00:05:36.095 "bdev_lvol_set_parent_bdev", 00:05:36.095 "bdev_lvol_set_parent", 00:05:36.095 "bdev_lvol_check_shallow_copy", 00:05:36.095 "bdev_lvol_start_shallow_copy", 00:05:36.095 "bdev_lvol_grow_lvstore", 00:05:36.095 "bdev_lvol_get_lvols", 00:05:36.095 "bdev_lvol_get_lvstores", 00:05:36.095 "bdev_lvol_delete", 00:05:36.095 "bdev_lvol_set_read_only", 00:05:36.095 "bdev_lvol_resize", 00:05:36.095 "bdev_lvol_decouple_parent", 00:05:36.095 "bdev_lvol_inflate", 00:05:36.095 "bdev_lvol_rename", 00:05:36.095 "bdev_lvol_clone_bdev", 00:05:36.095 "bdev_lvol_clone", 00:05:36.095 "bdev_lvol_snapshot", 00:05:36.095 "bdev_lvol_create", 00:05:36.095 "bdev_lvol_delete_lvstore", 00:05:36.095 "bdev_lvol_rename_lvstore", 00:05:36.095 "bdev_lvol_create_lvstore", 00:05:36.095 "bdev_raid_set_options", 00:05:36.095 "bdev_raid_remove_base_bdev", 00:05:36.095 "bdev_raid_add_base_bdev", 00:05:36.095 "bdev_raid_delete", 00:05:36.095 "bdev_raid_create", 00:05:36.095 "bdev_raid_get_bdevs", 00:05:36.095 "bdev_error_inject_error", 00:05:36.095 "bdev_error_delete", 
00:05:36.095 "bdev_error_create", 00:05:36.095 "bdev_split_delete", 00:05:36.095 "bdev_split_create", 00:05:36.095 "bdev_delay_delete", 00:05:36.095 "bdev_delay_create", 00:05:36.095 "bdev_delay_update_latency", 00:05:36.095 "bdev_zone_block_delete", 00:05:36.095 "bdev_zone_block_create", 00:05:36.095 "blobfs_create", 00:05:36.095 "blobfs_detect", 00:05:36.095 "blobfs_set_cache_size", 00:05:36.095 "bdev_aio_delete", 00:05:36.095 "bdev_aio_rescan", 00:05:36.095 "bdev_aio_create", 00:05:36.095 "bdev_ftl_set_property", 00:05:36.095 "bdev_ftl_get_properties", 00:05:36.095 "bdev_ftl_get_stats", 00:05:36.095 "bdev_ftl_unmap", 00:05:36.095 "bdev_ftl_unload", 00:05:36.095 "bdev_ftl_delete", 00:05:36.095 "bdev_ftl_load", 00:05:36.095 "bdev_ftl_create", 00:05:36.095 "bdev_virtio_attach_controller", 00:05:36.095 "bdev_virtio_scsi_get_devices", 00:05:36.095 "bdev_virtio_detach_controller", 00:05:36.095 "bdev_virtio_blk_set_hotplug", 00:05:36.095 "bdev_iscsi_delete", 00:05:36.095 "bdev_iscsi_create", 00:05:36.095 "bdev_iscsi_set_options", 00:05:36.095 "accel_error_inject_error", 00:05:36.095 "ioat_scan_accel_module", 00:05:36.095 "dsa_scan_accel_module", 00:05:36.095 "iaa_scan_accel_module", 00:05:36.095 "vfu_virtio_create_scsi_endpoint", 00:05:36.095 "vfu_virtio_scsi_remove_target", 00:05:36.095 "vfu_virtio_scsi_add_target", 00:05:36.095 "vfu_virtio_create_blk_endpoint", 00:05:36.095 "vfu_virtio_delete_endpoint", 00:05:36.095 "keyring_file_remove_key", 00:05:36.095 "keyring_file_add_key", 00:05:36.095 "keyring_linux_set_options", 00:05:36.095 "iscsi_get_histogram", 00:05:36.095 "iscsi_enable_histogram", 00:05:36.095 "iscsi_set_options", 00:05:36.095 "iscsi_get_auth_groups", 00:05:36.095 "iscsi_auth_group_remove_secret", 00:05:36.095 "iscsi_auth_group_add_secret", 00:05:36.095 "iscsi_delete_auth_group", 00:05:36.095 "iscsi_create_auth_group", 00:05:36.095 "iscsi_set_discovery_auth", 00:05:36.095 "iscsi_get_options", 00:05:36.096 "iscsi_target_node_request_logout", 00:05:36.096 "iscsi_target_node_set_redirect", 00:05:36.096 "iscsi_target_node_set_auth", 00:05:36.096 "iscsi_target_node_add_lun", 00:05:36.096 "iscsi_get_stats", 00:05:36.096 "iscsi_get_connections", 00:05:36.096 "iscsi_portal_group_set_auth", 00:05:36.096 "iscsi_start_portal_group", 00:05:36.096 "iscsi_delete_portal_group", 00:05:36.096 "iscsi_create_portal_group", 00:05:36.096 "iscsi_get_portal_groups", 00:05:36.096 "iscsi_delete_target_node", 00:05:36.096 "iscsi_target_node_remove_pg_ig_maps", 00:05:36.096 "iscsi_target_node_add_pg_ig_maps", 00:05:36.096 "iscsi_create_target_node", 00:05:36.096 "iscsi_get_target_nodes", 00:05:36.096 "iscsi_delete_initiator_group", 00:05:36.096 "iscsi_initiator_group_remove_initiators", 00:05:36.096 "iscsi_initiator_group_add_initiators", 00:05:36.096 "iscsi_create_initiator_group", 00:05:36.096 "iscsi_get_initiator_groups", 00:05:36.096 "nvmf_set_crdt", 00:05:36.096 "nvmf_set_config", 00:05:36.096 "nvmf_set_max_subsystems", 00:05:36.096 "nvmf_stop_mdns_prr", 00:05:36.096 "nvmf_publish_mdns_prr", 00:05:36.096 "nvmf_subsystem_get_listeners", 00:05:36.096 "nvmf_subsystem_get_qpairs", 00:05:36.096 "nvmf_subsystem_get_controllers", 00:05:36.096 "nvmf_get_stats", 00:05:36.096 "nvmf_get_transports", 00:05:36.096 "nvmf_create_transport", 00:05:36.096 "nvmf_get_targets", 00:05:36.096 "nvmf_delete_target", 00:05:36.096 "nvmf_create_target", 00:05:36.096 "nvmf_subsystem_allow_any_host", 00:05:36.096 "nvmf_subsystem_remove_host", 00:05:36.096 "nvmf_subsystem_add_host", 00:05:36.096 "nvmf_ns_remove_host", 
00:05:36.096 "nvmf_ns_add_host", 00:05:36.096 "nvmf_subsystem_remove_ns", 00:05:36.096 "nvmf_subsystem_add_ns", 00:05:36.096 "nvmf_subsystem_listener_set_ana_state", 00:05:36.096 "nvmf_discovery_get_referrals", 00:05:36.096 "nvmf_discovery_remove_referral", 00:05:36.096 "nvmf_discovery_add_referral", 00:05:36.096 "nvmf_subsystem_remove_listener", 00:05:36.096 "nvmf_subsystem_add_listener", 00:05:36.096 "nvmf_delete_subsystem", 00:05:36.096 "nvmf_create_subsystem", 00:05:36.096 "nvmf_get_subsystems", 00:05:36.096 "env_dpdk_get_mem_stats", 00:05:36.096 "nbd_get_disks", 00:05:36.096 "nbd_stop_disk", 00:05:36.096 "nbd_start_disk", 00:05:36.096 "ublk_recover_disk", 00:05:36.096 "ublk_get_disks", 00:05:36.096 "ublk_stop_disk", 00:05:36.096 "ublk_start_disk", 00:05:36.096 "ublk_destroy_target", 00:05:36.096 "ublk_create_target", 00:05:36.096 "virtio_blk_create_transport", 00:05:36.096 "virtio_blk_get_transports", 00:05:36.096 "vhost_controller_set_coalescing", 00:05:36.096 "vhost_get_controllers", 00:05:36.096 "vhost_delete_controller", 00:05:36.096 "vhost_create_blk_controller", 00:05:36.096 "vhost_scsi_controller_remove_target", 00:05:36.096 "vhost_scsi_controller_add_target", 00:05:36.096 "vhost_start_scsi_controller", 00:05:36.096 "vhost_create_scsi_controller", 00:05:36.096 "thread_set_cpumask", 00:05:36.096 "framework_get_governor", 00:05:36.096 "framework_get_scheduler", 00:05:36.096 "framework_set_scheduler", 00:05:36.096 "framework_get_reactors", 00:05:36.096 "thread_get_io_channels", 00:05:36.096 "thread_get_pollers", 00:05:36.096 "thread_get_stats", 00:05:36.096 "framework_monitor_context_switch", 00:05:36.096 "spdk_kill_instance", 00:05:36.096 "log_enable_timestamps", 00:05:36.096 "log_get_flags", 00:05:36.096 "log_clear_flag", 00:05:36.096 "log_set_flag", 00:05:36.096 "log_get_level", 00:05:36.096 "log_set_level", 00:05:36.096 "log_get_print_level", 00:05:36.096 "log_set_print_level", 00:05:36.096 "framework_enable_cpumask_locks", 00:05:36.096 "framework_disable_cpumask_locks", 00:05:36.096 "framework_wait_init", 00:05:36.096 "framework_start_init", 00:05:36.096 "scsi_get_devices", 00:05:36.096 "bdev_get_histogram", 00:05:36.096 "bdev_enable_histogram", 00:05:36.096 "bdev_set_qos_limit", 00:05:36.096 "bdev_set_qd_sampling_period", 00:05:36.096 "bdev_get_bdevs", 00:05:36.096 "bdev_reset_iostat", 00:05:36.096 "bdev_get_iostat", 00:05:36.096 "bdev_examine", 00:05:36.096 "bdev_wait_for_examine", 00:05:36.096 "bdev_set_options", 00:05:36.096 "notify_get_notifications", 00:05:36.096 "notify_get_types", 00:05:36.096 "accel_get_stats", 00:05:36.096 "accel_set_options", 00:05:36.096 "accel_set_driver", 00:05:36.096 "accel_crypto_key_destroy", 00:05:36.096 "accel_crypto_keys_get", 00:05:36.096 "accel_crypto_key_create", 00:05:36.096 "accel_assign_opc", 00:05:36.096 "accel_get_module_info", 00:05:36.096 "accel_get_opc_assignments", 00:05:36.096 "vmd_rescan", 00:05:36.096 "vmd_remove_device", 00:05:36.096 "vmd_enable", 00:05:36.096 "sock_get_default_impl", 00:05:36.096 "sock_set_default_impl", 00:05:36.096 "sock_impl_set_options", 00:05:36.096 "sock_impl_get_options", 00:05:36.096 "iobuf_get_stats", 00:05:36.096 "iobuf_set_options", 00:05:36.096 "keyring_get_keys", 00:05:36.096 "framework_get_pci_devices", 00:05:36.096 "framework_get_config", 00:05:36.096 "framework_get_subsystems", 00:05:36.096 "vfu_tgt_set_base_path", 00:05:36.096 "trace_get_info", 00:05:36.096 "trace_get_tpoint_group_mask", 00:05:36.096 "trace_disable_tpoint_group", 00:05:36.096 "trace_enable_tpoint_group", 00:05:36.096 
"trace_clear_tpoint_mask", 00:05:36.096 "trace_set_tpoint_mask", 00:05:36.096 "spdk_get_version", 00:05:36.096 "rpc_get_methods" 00:05:36.096 ] 00:05:36.096 18:06:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:36.096 18:06:02 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.096 18:06:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.354 18:06:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:36.354 18:06:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1337290 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1337290 ']' 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1337290 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337290 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337290' 00:05:36.354 killing process with pid 1337290 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1337290 00:05:36.354 18:06:02 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1337290 00:05:36.613 00:05:36.613 real 0m1.196s 00:05:36.613 user 0m2.152s 00:05:36.613 sys 0m0.432s 00:05:36.613 18:06:02 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.613 18:06:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.613 ************************************ 00:05:36.613 END TEST spdkcli_tcp 00:05:36.613 ************************************ 00:05:36.613 18:06:02 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.613 18:06:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.613 18:06:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.613 18:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:36.613 ************************************ 00:05:36.613 START TEST dpdk_mem_utility 00:05:36.613 ************************************ 00:05:36.613 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.871 * Looking for test storage... 
00:05:36.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:36.871 18:06:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.871 18:06:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1337517 00:05:36.871 18:06:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.871 18:06:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1337517 00:05:36.871 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1337517 ']' 00:05:36.871 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.871 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.871 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.871 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.871 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.871 [2024-07-26 18:06:02.840380] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:36.871 [2024-07-26 18:06:02.840483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337517 ] 00:05:36.871 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.871 [2024-07-26 18:06:02.875794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
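test_dpdk_mem_info.sh, starting here, exercises the env_dpdk_get_mem_stats RPC and the dpdk_mem_info.py post-processor whose output follows. The flow reduces to three commands (paths per this workspace; the dump filename comes from the RPC response shown below):

    # Have the running target dump its DPDK allocator state to a file.
    scripts/rpc.py env_dpdk_get_mem_stats    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    # Summarize the dump: heaps, mempools, memzones.
    scripts/dpdk_mem_info.py

    # Re-run with -m 0, as the test does; judging by the output this switches
    # to the detailed per-element view of heap 0.
    scripts/dpdk_mem_info.py -m 0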
00:05:36.871 [2024-07-26 18:06:02.903088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.871 [2024-07-26 18:06:02.989709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.127 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.127 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:37.127 18:06:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.127 18:06:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.127 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.127 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.127 { 00:05:37.127 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.127 } 00:05:37.127 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.127 18:06:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.385 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:37.385 1 heaps totaling size 814.000000 MiB 00:05:37.385 size: 814.000000 MiB heap id: 0 00:05:37.385 end heaps---------- 00:05:37.385 8 mempools totaling size 598.116089 MiB 00:05:37.385 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.385 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.385 size: 84.521057 MiB name: bdev_io_1337517 00:05:37.385 size: 51.011292 MiB name: evtpool_1337517 00:05:37.385 size: 50.003479 MiB name: msgpool_1337517 00:05:37.385 size: 21.763794 MiB name: PDU_Pool 00:05:37.385 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.385 size: 0.026123 MiB name: Session_Pool 00:05:37.385 end mempools------- 00:05:37.385 6 memzones totaling size 4.142822 MiB 00:05:37.385 size: 1.000366 MiB name: RG_ring_0_1337517 00:05:37.385 size: 1.000366 MiB name: RG_ring_1_1337517 00:05:37.385 size: 1.000366 MiB name: RG_ring_4_1337517 00:05:37.385 size: 1.000366 MiB name: RG_ring_5_1337517 00:05:37.385 size: 0.125366 MiB name: RG_ring_2_1337517 00:05:37.385 size: 0.015991 MiB name: RG_ring_3_1337517 00:05:37.385 end memzones------- 00:05:37.385 18:06:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.385 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:37.385 list of free elements. 
size: 12.519348 MiB 00:05:37.385 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:37.385 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:37.385 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:37.385 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:37.385 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:37.385 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:37.385 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:37.385 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:37.385 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:37.385 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:37.385 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:37.385 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:37.385 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:37.385 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:37.385 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:37.385 list of standard malloc elements. size: 199.218079 MiB 00:05:37.385 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:37.385 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:37.385 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:37.385 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:37.385 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.385 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.385 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:37.385 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.385 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:37.385 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:37.385 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:37.385 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:37.385 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:37.385 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:37.385 list of memzone associated elements. size: 602.262573 MiB 00:05:37.385 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:37.385 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.385 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:37.385 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.385 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:37.385 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1337517_0 00:05:37.385 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:37.385 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1337517_0 00:05:37.385 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:37.385 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1337517_0 00:05:37.385 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:37.385 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.385 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:37.385 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.385 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:37.385 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1337517 00:05:37.385 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:37.385 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1337517 00:05:37.385 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.385 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1337517 00:05:37.385 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:37.385 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.385 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:37.385 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.385 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:37.385 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.386 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:37.386 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.386 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:37.386 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1337517 00:05:37.386 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:37.386 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1337517 00:05:37.386 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:37.386 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1337517 00:05:37.386 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:37.386 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1337517 00:05:37.386 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:37.386 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1337517 00:05:37.386 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:37.386 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.386 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:37.386 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.386 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:37.386 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.386 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:37.386 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1337517 00:05:37.386 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:37.386 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.386 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:37.386 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.386 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:37.386 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1337517 00:05:37.386 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:37.386 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.386 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:37.386 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1337517 00:05:37.386 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:37.386 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1337517 00:05:37.386 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:37.386 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.386 18:06:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.386 18:06:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1337517 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1337517 ']' 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1337517 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337517 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337517' 00:05:37.386 killing process with pid 1337517 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1337517 00:05:37.386 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1337517 00:05:37.644 00:05:37.644 real 0m1.048s 00:05:37.644 user 0m1.023s 00:05:37.645 sys 0m0.403s 00:05:37.645 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.645 18:06:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.645 ************************************ 00:05:37.645 END TEST dpdk_mem_utility 00:05:37.645 ************************************ 00:05:37.902 18:06:03 -- spdk/autotest.sh@181 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:37.902 18:06:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.902 18:06:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.902 18:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.902 ************************************ 00:05:37.902 START TEST event 00:05:37.902 ************************************ 00:05:37.902 18:06:03 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:37.902 * Looking for test storage... 00:05:37.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:37.902 18:06:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:37.902 18:06:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:37.902 18:06:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.902 18:06:03 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:37.902 18:06:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.902 18:06:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.902 ************************************ 00:05:37.902 START TEST event_perf 00:05:37.902 ************************************ 00:05:37.902 18:06:03 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.902 Running I/O for 1 seconds...[2024-07-26 18:06:03.927874] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:37.902 [2024-07-26 18:06:03.927939] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337707 ] 00:05:37.902 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.902 [2024-07-26 18:06:03.963553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:37.902 [2024-07-26 18:06:03.993613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.160 [2024-07-26 18:06:04.087085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.160 [2024-07-26 18:06:04.087130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.160 [2024-07-26 18:06:04.087220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.160 [2024-07-26 18:06:04.087223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.091 Running I/O for 1 seconds... 00:05:39.091 lcore 0: 226088 00:05:39.091 lcore 1: 226086 00:05:39.091 lcore 2: 226087 00:05:39.091 lcore 3: 226087 00:05:39.091 done. 
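event_perf prints one counter per lcore: how many events that reactor processed during the -t 1 second window. Summing the four lines from this run gives the aggregate rate:

    # Quick aggregate check using the exact counters above.
    echo $((226088 + 226086 + 226087 + 226087))   # 904348 -> ~904k events/sec across 4 cores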
00:05:39.091 00:05:39.091 real 0m1.256s 00:05:39.091 user 0m4.162s 00:05:39.091 sys 0m0.090s 00:05:39.091 18:06:05 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.091 18:06:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.091 ************************************ 00:05:39.091 END TEST event_perf 00:05:39.091 ************************************ 00:05:39.091 18:06:05 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:39.091 18:06:05 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:39.091 18:06:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.091 18:06:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.091 ************************************ 00:05:39.091 START TEST event_reactor 00:05:39.091 ************************************ 00:05:39.091 18:06:05 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:39.091 [2024-07-26 18:06:05.231859] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:39.091 [2024-07-26 18:06:05.231926] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337862 ] 00:05:39.350 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.350 [2024-07-26 18:06:05.264423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:39.350 [2024-07-26 18:06:05.294492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.350 [2024-07-26 18:06:05.387134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.722 test_start 00:05:40.722 oneshot 00:05:40.722 tick 100 00:05:40.722 tick 100 00:05:40.722 tick 250 00:05:40.722 tick 100 00:05:40.722 tick 100 00:05:40.722 tick 100 00:05:40.722 tick 250 00:05:40.722 tick 500 00:05:40.722 tick 100 00:05:40.722 tick 100 00:05:40.722 tick 250 00:05:40.722 tick 100 00:05:40.722 tick 100 00:05:40.722 test_end 00:05:40.722 00:05:40.722 real 0m1.250s 00:05:40.722 user 0m1.166s 00:05:40.722 sys 0m0.079s 00:05:40.722 18:06:06 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.722 18:06:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:40.722 ************************************ 00:05:40.722 END TEST event_reactor 00:05:40.722 ************************************ 00:05:40.722 18:06:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.723 18:06:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:40.723 18:06:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.723 18:06:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.723 ************************************ 00:05:40.723 START TEST event_reactor_perf 00:05:40.723 ************************************ 00:05:40.723 18:06:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.723 [2024-07-26 18:06:06.526827] Starting SPDK v24.09-pre git sha1 
704257090 / DPDK 24.07.0-rc3 initialization... 00:05:40.723 [2024-07-26 18:06:06.526889] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338022 ] 00:05:40.723 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.723 [2024-07-26 18:06:06.559030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:40.723 [2024-07-26 18:06:06.589014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.723 [2024-07-26 18:06:06.680608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.654 test_start 00:05:41.654 test_end 00:05:41.654 Performance: 357077 events per second 00:05:41.654 00:05:41.654 real 0m1.242s 00:05:41.654 user 0m1.159s 00:05:41.654 sys 0m0.077s 00:05:41.654 18:06:07 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.654 18:06:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.654 ************************************ 00:05:41.654 END TEST event_reactor_perf 00:05:41.654 ************************************ 00:05:41.654 18:06:07 event -- event/event.sh@49 -- # uname -s 00:05:41.654 18:06:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.654 18:06:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.654 18:06:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.654 18:06:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.654 18:06:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.912 ************************************ 00:05:41.912 START TEST event_scheduler 00:05:41.912 ************************************ 00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.912 * Looking for test storage... 00:05:41.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:41.912 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.912 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1338483 00:05:41.912 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.912 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.912 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1338483 00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1338483 ']' 00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
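scheduler.sh launches its app with --wait-for-rpc, which holds framework initialization until an RPC releases it; that is why the next lines can install the dynamic scheduler before the reactors start scheduling. Condensed, the handshake is as follows (app path per this workspace; -p 0x2 matches the --main-lcore=2 visible in the EAL parameters, and -f is copied verbatim from the run):

    # Start paused: --wait-for-rpc defers subsystem init until told otherwise.
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

    # Choose the scheduler, then let initialization proceed.
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init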
00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.912 18:06:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.912 [2024-07-26 18:06:07.901090] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:41.912 [2024-07-26 18:06:07.901167] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338483 ] 00:05:41.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.912 [2024-07-26 18:06:07.935880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:41.912 [2024-07-26 18:06:07.964533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.912 [2024-07-26 18:06:08.056285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.912 [2024-07-26 18:06:08.056310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.912 [2024-07-26 18:06:08.056394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.912 [2024-07-26 18:06:08.056397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:42.170 18:06:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 [2024-07-26 18:06:08.121205] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:42.170 [2024-07-26 18:06:08.121231] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.170 [2024-07-26 18:06:08.121247] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.170 [2024-07-26 18:06:08.121259] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.170 [2024-07-26 18:06:08.121269] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.170 18:06:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 [2024-07-26 18:06:08.212082] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.170 18:06:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 ************************************ 00:05:42.170 START TEST scheduler_create_thread 00:05:42.170 ************************************ 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 2 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 3 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 4 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 5 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.170 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 6 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 7 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 8 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.171 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 9 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 10 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.429 18:06:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.799 18:06:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.799 18:06:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:43.799 18:06:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:43.799 18:06:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.799 18:06:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.731 18:06:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.731 00:05:44.731 real 0m2.617s 00:05:44.731 user 0m0.015s 00:05:44.731 sys 0m0.005s 00:05:44.731 18:06:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.731 18:06:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.731 ************************************ 00:05:44.731 END TEST scheduler_create_thread 00:05:44.731 ************************************ 00:05:44.988 18:06:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:44.988 18:06:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1338483 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1338483 ']' 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1338483 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1338483 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1338483' 00:05:44.988 killing process with pid 1338483 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1338483 00:05:44.988 18:06:10 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1338483 00:05:45.246 [2024-07-26 18:06:11.339366] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
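scheduler_create_thread drives the app through rpc.py's --plugin mechanism: it creates pinned threads with various cpumasks and active percentages, throttles one, and deletes another, leaving the dynamic scheduler to rebalance what remains. Stripped of the rpc_cmd wrapper, the calls are the following (this assumes scheduler_plugin.py is importable, e.g. via PYTHONPATH, which the harness arranges; 11 and 12 are the thread ids this run captured):

    # A pinned thread on core 0 at 100% active load (prints its thread id).
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # Throttle thread 11 to 50% active, then delete thread 12.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12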
00:05:45.505 00:05:45.505 real 0m3.762s 00:05:45.505 user 0m5.731s 00:05:45.505 sys 0m0.312s 00:05:45.505 18:06:11 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.505 18:06:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.505 ************************************ 00:05:45.505 END TEST event_scheduler 00:05:45.505 ************************************ 00:05:45.505 18:06:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:45.505 18:06:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:45.505 18:06:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.505 18:06:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.505 18:06:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.505 ************************************ 00:05:45.505 START TEST app_repeat 00:05:45.505 ************************************ 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1339278 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1339278' 00:05:45.505 Process app_repeat pid: 1339278 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:45.505 spdk_app_start Round 0 00:05:45.505 18:06:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1339278 /var/tmp/spdk-nbd.sock 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1339278 ']' 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.505 18:06:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.505 [2024-07-26 18:06:11.641925] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:05:45.505 [2024-07-26 18:06:11.641993] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339278 ] 00:05:45.764 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.764 [2024-07-26 18:06:11.673972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:45.764 [2024-07-26 18:06:11.705616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.764 [2024-07-26 18:06:11.795644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.764 [2024-07-26 18:06:11.795649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.764 18:06:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.764 18:06:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.764 18:06:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.022 Malloc0 00:05:46.022 18:06:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.280 Malloc1 00:05:46.538 18:06:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.538 18:06:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.796 /dev/nbd0 00:05:46.796 18:06:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.796 18:06:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.796 18:06:12 
event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.796 18:06:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.796 1+0 records in 00:05:46.796 1+0 records out 00:05:46.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016963 s, 24.1 MB/s 00:05:46.797 18:06:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.797 18:06:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.797 18:06:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.797 18:06:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.797 18:06:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.797 18:06:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.797 18:06:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.797 18:06:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.055 /dev/nbd1 00:05:47.055 18:06:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.055 18:06:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.055 18:06:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.055 1+0 records in 00:05:47.055 1+0 records out 00:05:47.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207614 s, 19.7 MB/s 00:05:47.055 18:06:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.055 18:06:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:47.055 18:06:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.055 18:06:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.055 
18:06:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:47.055 18:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.055 18:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.055 18:06:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.055 18:06:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.055 18:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.317 { 00:05:47.317 "nbd_device": "/dev/nbd0", 00:05:47.317 "bdev_name": "Malloc0" 00:05:47.317 }, 00:05:47.317 { 00:05:47.317 "nbd_device": "/dev/nbd1", 00:05:47.317 "bdev_name": "Malloc1" 00:05:47.317 } 00:05:47.317 ]' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.317 { 00:05:47.317 "nbd_device": "/dev/nbd0", 00:05:47.317 "bdev_name": "Malloc0" 00:05:47.317 }, 00:05:47.317 { 00:05:47.317 "nbd_device": "/dev/nbd1", 00:05:47.317 "bdev_name": "Malloc1" 00:05:47.317 } 00:05:47.317 ]' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.317 /dev/nbd1' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.317 /dev/nbd1' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.317 256+0 records in 00:05:47.317 256+0 records out 00:05:47.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519961 s, 202 MB/s 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.317 256+0 records in 00:05:47.317 256+0 records out 00:05:47.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209635 s, 50.0 MB/s 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.317 256+0 records in 00:05:47.317 256+0 records out 00:05:47.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023854 s, 44.0 MB/s 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.317 18:06:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.574 18:06:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.854 18:06:13 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.854 18:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.131 18:06:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.131 18:06:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.389 18:06:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.647 [2024-07-26 18:06:14.743341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.905 [2024-07-26 18:06:14.833556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.905 [2024-07-26 18:06:14.833562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.905 [2024-07-26 18:06:14.893412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.905 [2024-07-26 18:06:14.893488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
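Round 0 of app_repeat ends above: both nbd devices are stopped, the target is killed with SIGTERM, and after the "sleep 3" the app comes back up for Round 1. Reconstructed from the event.sh@23-35 records in this trace, the driving loop is roughly the sketch below; "rpc" abbreviates the full scripts/rpc.py path, the Malloc names are inferred from the output, and this is a hedged reading of the trace rather than the verbatim script.

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"     # full path abbreviated
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten 1339278 /var/tmp/spdk-nbd.sock   # 1339278 is this run's app_repeat pid
        $rpc bdev_malloc_create 64 4096                # 64 MiB bdev, 4 KiB blocks -> Malloc0
        $rpc bdev_malloc_create 64 4096                # second one -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM                # tear the app down to end the round
        sleep 3                                        # let it restart before the next pass
    done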
00:05:51.433 18:06:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.433 18:06:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:51.433 spdk_app_start Round 1 00:05:51.433 18:06:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1339278 /var/tmp/spdk-nbd.sock 00:05:51.433 18:06:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1339278 ']' 00:05:51.433 18:06:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.433 18:06:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.433 18:06:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.433 18:06:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.433 18:06:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.691 18:06:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.691 18:06:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.691 18:06:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.949 Malloc0 00:05:51.949 18:06:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.208 Malloc1 00:05:52.208 18:06:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.208 18:06:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.467 /dev/nbd0 00:05:52.467 18:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.467 18:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.467 1+0 records in 00:05:52.467 1+0 records out 00:05:52.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149505 s, 27.4 MB/s 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.467 18:06:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.467 18:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.467 18:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.467 18:06:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.725 /dev/nbd1 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.725 1+0 records in 00:05:52.725 1+0 records out 00:05:52.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170532 s, 24.0 MB/s 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.725 18:06:18 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.725 18:06:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.725 18:06:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.983 18:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.983 { 00:05:52.983 "nbd_device": "/dev/nbd0", 00:05:52.983 "bdev_name": "Malloc0" 00:05:52.983 }, 00:05:52.983 { 00:05:52.983 "nbd_device": "/dev/nbd1", 00:05:52.983 "bdev_name": "Malloc1" 00:05:52.983 } 00:05:52.983 ]' 00:05:52.983 18:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.983 { 00:05:52.983 "nbd_device": "/dev/nbd0", 00:05:52.983 "bdev_name": "Malloc0" 00:05:52.983 }, 00:05:52.983 { 00:05:52.983 "nbd_device": "/dev/nbd1", 00:05:52.983 "bdev_name": "Malloc1" 00:05:52.983 } 00:05:52.983 ]' 00:05:52.983 18:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.983 18:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.983 /dev/nbd1' 00:05:52.983 18:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.983 /dev/nbd1' 00:05:52.983 18:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.241 18:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.241 18:06:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.242 256+0 records in 00:05:53.242 256+0 records out 00:05:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415471 s, 252 MB/s 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.242 256+0 records in 00:05:53.242 256+0 records out 00:05:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0268768 s, 39.0 MB/s 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.242 256+0 records in 00:05:53.242 256+0 records out 00:05:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243931 s, 43.0 MB/s 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.242 18:06:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.500 18:06:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.758 18:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.017 18:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.017 18:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.017 18:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.017 18:06:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.017 18:06:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.275 18:06:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.533 [2024-07-26 18:06:20.542117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.533 [2024-07-26 18:06:20.631285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.533 [2024-07-26 18:06:20.631290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.792 [2024-07-26 18:06:20.694025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.792 [2024-07-26 18:06:20.694122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
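Every nbd_start_disk above is followed by the waitfornbd helper, whose two retry loops produce the autotest_common.sh@868-889 records. Below is a hedged reconstruction from those line numbers; the sleep between polls and the $testdir variable are assumptions the trace does not show directly.

    waitfornbd() {
        local nbd_name=$1 i
        # first wait for the kernel to list the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed poll interval
        done
        # then retry a direct-I/O read of one 4 KiB block until it succeeds
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=$testdir/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1   # assumed poll interval
        done
        local size=$(stat -c %s $testdir/nbdtest)
        rm -f $testdir/nbdtest
        [ "$size" != "0" ]   # non-empty read-back means the device is live
    }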
00:05:57.320 18:06:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.320 18:06:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.320 spdk_app_start Round 2 00:05:57.320 18:06:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1339278 /var/tmp/spdk-nbd.sock 00:05:57.320 18:06:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1339278 ']' 00:05:57.320 18:06:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.320 18:06:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.320 18:06:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.320 18:06:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.320 18:06:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.577 18:06:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.577 18:06:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.577 18:06:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.835 Malloc0 00:05:57.835 18:06:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.094 Malloc1 00:05:58.094 18:06:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.094 18:06:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.353 /dev/nbd0 00:05:58.353 18:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.353 18:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.353 1+0 records in 00:05:58.353 1+0 records out 00:05:58.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141525 s, 28.9 MB/s 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.353 18:06:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.353 18:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.353 18:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.353 18:06:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.615 /dev/nbd1 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.615 1+0 records in 00:05:58.615 1+0 records out 00:05:58.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204294 s, 20.0 MB/s 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.615 18:06:24 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.615 18:06:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.615 18:06:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.875 18:06:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.875 { 00:05:58.875 "nbd_device": "/dev/nbd0", 00:05:58.876 "bdev_name": "Malloc0" 00:05:58.876 }, 00:05:58.876 { 00:05:58.876 "nbd_device": "/dev/nbd1", 00:05:58.876 "bdev_name": "Malloc1" 00:05:58.876 } 00:05:58.876 ]' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.876 { 00:05:58.876 "nbd_device": "/dev/nbd0", 00:05:58.876 "bdev_name": "Malloc0" 00:05:58.876 }, 00:05:58.876 { 00:05:58.876 "nbd_device": "/dev/nbd1", 00:05:58.876 "bdev_name": "Malloc1" 00:05:58.876 } 00:05:58.876 ]' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.876 /dev/nbd1' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.876 /dev/nbd1' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.876 256+0 records in 00:05:58.876 256+0 records out 00:05:58.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488478 s, 215 MB/s 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.876 256+0 records in 00:05:58.876 256+0 records out 00:05:58.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0223903 s, 46.8 MB/s 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.876 256+0 records in 00:05:58.876 256+0 records out 00:05:58.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02807 s, 37.4 MB/s 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.876 18:06:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.876 18:06:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.135 18:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.135 18:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.135 18:06:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.135 18:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.135 18:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.135 18:06:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.392 18:06:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.392 18:06:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.392 18:06:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.392 18:06:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.651 18:06:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.910 18:06:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.910 18:06:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.170 18:06:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.430 [2024-07-26 18:06:26.332451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.430 [2024-07-26 18:06:26.422019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.430 [2024-07-26 18:06:26.422023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.430 [2024-07-26 18:06:26.484732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.430 [2024-07-26 18:06:26.484815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
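The 1 MiB write/compare pass repeated in each round is the data stage of nbd_rpc_data_verify; the nbd_common.sh@70-85 records map onto roughly the helper below. The dd and cmp arguments and the nbdrandtest path are taken from the trace; the function body itself is a reconstruction, with $testdir assumed to point at spdk/test/event.

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=$testdir/nbdrandtest
        if [ "$operation" = "write" ]; then
            # stage 1 MiB of random data, then push it through each nbd with direct I/O
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = "verify" ]; then
            # byte-compare the first 1 MiB of each device against the staged file
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M $tmp_file $i
            done
            rm $tmp_file
        fi
    }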
00:06:02.973 18:06:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1339278 /var/tmp/spdk-nbd.sock 00:06:02.973 18:06:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1339278 ']' 00:06:02.973 18:06:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.973 18:06:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.973 18:06:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.973 18:06:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.973 18:06:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:03.232 18:06:29 event.app_repeat -- event/event.sh@39 -- # killprocess 1339278 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1339278 ']' 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1339278 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.232 18:06:29 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339278 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339278' 00:06:03.491 killing process with pid 1339278 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1339278 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1339278 00:06:03.491 spdk_app_start is called in Round 0. 00:06:03.491 Shutdown signal received, stop current app iteration 00:06:03.491 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:06:03.491 spdk_app_start is called in Round 1. 00:06:03.491 Shutdown signal received, stop current app iteration 00:06:03.491 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:06:03.491 spdk_app_start is called in Round 2. 00:06:03.491 Shutdown signal received, stop current app iteration 00:06:03.491 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:06:03.491 spdk_app_start is called in Round 3. 
00:06:03.491 Shutdown signal received, stop current app iteration 00:06:03.491 18:06:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:03.491 18:06:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:03.491 00:06:03.491 real 0m17.966s 00:06:03.491 user 0m39.148s 00:06:03.491 sys 0m3.245s 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.491 18:06:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.491 ************************************ 00:06:03.491 END TEST app_repeat 00:06:03.491 ************************************ 00:06:03.491 18:06:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:03.491 18:06:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.491 18:06:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.491 18:06:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.491 18:06:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.491 ************************************ 00:06:03.491 START TEST cpu_locks 00:06:03.491 ************************************ 00:06:03.491 18:06:29 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.750 * Looking for test storage... 00:06:03.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:03.750 18:06:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:03.750 18:06:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:03.750 18:06:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:03.750 18:06:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:03.750 18:06:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.750 18:06:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.750 18:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.750 ************************************ 00:06:03.750 START TEST default_locks 00:06:03.750 ************************************ 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1341630 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1341630 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1341630 ']' 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
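The default_locks test starting above checks that a running spdk_tgt holds its per-core CPU lock file: launch the target on core 0, assert an spdk_cpu_lock entry exists via lslocks, kill the target, then expect waitforlisten on the dead pid to fail. The "lslocks: write error" a few records below is benign: grep -q exits on the first match and closes the pipe under lslocks. From the cpu_locks.sh@22-54 records, the flow is roughly the following sketch (binary path abbreviated; pid 1341630 in this run):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # target holds an spdk_cpu_lock file
    }

    spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid
    locks_exist $spdk_tgt_pid          # lock must be present while the target runs
    killprocess $spdk_tgt_pid
    NOT waitforlisten $spdk_tgt_pid    # NOT inverts the status: the pid must be gone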
00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.750 18:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.750 [2024-07-26 18:06:29.764304] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:03.750 [2024-07-26 18:06:29.764400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341630 ] 00:06:03.750 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.750 [2024-07-26 18:06:29.795533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.750 [2024-07-26 18:06:29.821082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.008 [2024-07-26 18:06:29.906558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.267 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.267 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:04.267 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1341630 00:06:04.267 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1341630 00:06:04.267 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.526 lslocks: write error 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1341630 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1341630 ']' 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1341630 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341630 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341630' 00:06:04.527 killing process with pid 1341630 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1341630 00:06:04.527 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1341630 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1341630 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1341630 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t 
waitforlisten 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1341630 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1341630 ']' 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1341630) - No such process 00:06:04.805 ERROR: process (pid: 1341630) is no longer running 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.805 00:06:04.805 real 0m1.155s 00:06:04.805 user 0m1.090s 00:06:04.805 sys 0m0.535s 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.805 18:06:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.805 ************************************ 00:06:04.805 END TEST default_locks 00:06:04.805 ************************************ 00:06:04.805 18:06:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.805 18:06:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.805 18:06:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.805 18:06:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.805 ************************************ 00:06:04.805 START TEST default_locks_via_rpc 00:06:04.805 ************************************ 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1341793 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1341793 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1341793 ']' 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.805 18:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.074 [2024-07-26 18:06:30.964419] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:05.074 [2024-07-26 18:06:30.964518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341793 ] 00:06:05.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.074 [2024-07-26 18:06:30.997100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.074 [2024-07-26 18:06:31.023659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.074 [2024-07-26 18:06:31.112745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@71 -- # locks_exist 1341793 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1341793 00:06:05.334 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.593 18:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1341793 00:06:05.593 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1341793 ']' 00:06:05.593 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1341793 00:06:05.593 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:05.593 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.593 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341793 00:06:05.853 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.853 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.853 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341793' 00:06:05.853 killing process with pid 1341793 00:06:05.853 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1341793 00:06:05.853 18:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1341793 00:06:06.113 00:06:06.113 real 0m1.235s 00:06:06.113 user 0m1.175s 00:06:06.113 sys 0m0.538s 00:06:06.113 18:06:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.113 18:06:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.113 ************************************ 00:06:06.113 END TEST default_locks_via_rpc 00:06:06.113 ************************************ 00:06:06.113 18:06:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:06.113 18:06:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.113 18:06:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.113 18:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.113 ************************************ 00:06:06.113 START TEST non_locking_app_on_locked_coremask 00:06:06.113 ************************************ 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1341962 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1341962 /var/tmp/spdk.sock 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1341962 ']' 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.113 18:06:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.113 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.113 [2024-07-26 18:06:32.245819] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:06.113 [2024-07-26 18:06:32.245921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341962 ] 00:06:06.373 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.373 [2024-07-26 18:06:32.278863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.373 [2024-07-26 18:06:32.304652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.373 [2024-07-26 18:06:32.389289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1341970 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1341970 /var/tmp/spdk2.sock 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1341970 ']' 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.632 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.632 [2024-07-26 18:06:32.684797] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
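The non_locking_app_on_locked_coremask run above launches a second spdk_tgt on the same core mask as the first; because the second instance passes --disable-cpumask-locks and talks on its own RPC socket (-r /var/tmp/spdk2.sock), both can share core 0. A minimal sketch of that setup, assuming the binary path from the log and root privileges (the sleep is a crude stand-in for the test's waitforlisten helper):

  # Sketch: two SPDK targets sharing core 0; the second skips the CPU-core lock files.
  SPDK_BIN=./build/bin/spdk_tgt                # assumed path to the target binary

  $SPDK_BIN -m 0x1 &                           # first target claims the lock for core 0
  pid1=$!
  $SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                      # second target skips locking, separate RPC socket

  sleep 2                                      # crude stand-in for waitforlisten
  kill "$pid2" "$pid1"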
00:06:06.632 [2024-07-26 18:06:32.684888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341970 ] 00:06:06.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.632 [2024-07-26 18:06:32.717763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.632 [2024-07-26 18:06:32.775569] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.632 [2024-07-26 18:06:32.775599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.891 [2024-07-26 18:06:32.959015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1341962 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1341962 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.827 lslocks: write error 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1341962 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1341962 ']' 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1341962 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.827 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341962 00:06:08.085 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.085 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.085 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341962' 00:06:08.085 killing process with pid 1341962 00:06:08.085 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1341962 00:06:08.085 18:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1341962 00:06:08.653 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1341970 00:06:08.653 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1341970 ']' 00:06:08.653 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1341970 00:06:08.653 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.913 18:06:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.913 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341970 00:06:08.913 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.913 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.913 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341970' 00:06:08.913 killing process with pid 1341970 00:06:08.913 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1341970 00:06:08.913 18:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1341970 00:06:09.172 00:06:09.172 real 0m3.037s 00:06:09.172 user 0m3.174s 00:06:09.172 sys 0m1.012s 00:06:09.172 18:06:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.172 18:06:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.172 ************************************ 00:06:09.172 END TEST non_locking_app_on_locked_coremask 00:06:09.172 ************************************ 00:06:09.172 18:06:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:09.172 18:06:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.172 18:06:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.172 18:06:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.172 ************************************ 00:06:09.172 START TEST locking_app_on_unlocked_coremask 00:06:09.172 ************************************ 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1342393 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1342393 /var/tmp/spdk.sock 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1342393 ']' 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
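The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from a waitforlisten helper that polls until the target's RPC socket appears, or bails out if the process dies first (as in the default_locks ENOENT case earlier). A simplified sketch of such a loop; the real helper in autotest_common.sh also retries the RPC itself, but rpc_addr and the retry count below follow the defaults visible in the log:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died before it could listen
      [ -S "$rpc_addr" ] && return 0           # socket exists; good enough for this sketch
      sleep 0.1
    done
    return 1
  }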
00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.172 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.432 [2024-07-26 18:06:35.329700] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:09.432 [2024-07-26 18:06:35.329804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342393 ] 00:06:09.432 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.432 [2024-07-26 18:06:35.361768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.432 [2024-07-26 18:06:35.388169] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.432 [2024-07-26 18:06:35.388194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.432 [2024-07-26 18:06:35.477334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1342403 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1342403 /var/tmp/spdk2.sock 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1342403 ']' 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.690 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.690 [2024-07-26 18:06:35.771161] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:09.690 [2024-07-26 18:06:35.771243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342403 ] 00:06:09.690 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.690 [2024-07-26 18:06:35.804773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:09.948 [2024-07-26 18:06:35.863506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.948 [2024-07-26 18:06:36.046319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.883 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.883 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:10.883 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1342403 00:06:10.883 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.883 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1342403 00:06:11.143 lslocks: write error 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1342393 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1342393 ']' 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1342393 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1342393 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1342393' 00:06:11.143 killing process with pid 1342393 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1342393 00:06:11.143 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1342393 00:06:12.083 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1342403 00:06:12.083 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1342403 ']' 00:06:12.083 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1342403 00:06:12.083 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.083 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.083 18:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1342403 00:06:12.083 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.083 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.083 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1342403' 00:06:12.083 killing process 
with pid 1342403 00:06:12.083 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1342403 00:06:12.083 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1342403 00:06:12.343 00:06:12.343 real 0m3.151s 00:06:12.343 user 0m3.287s 00:06:12.343 sys 0m1.009s 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.343 ************************************ 00:06:12.343 END TEST locking_app_on_unlocked_coremask 00:06:12.343 ************************************ 00:06:12.343 18:06:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:12.343 18:06:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.343 18:06:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.343 18:06:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.343 ************************************ 00:06:12.343 START TEST locking_app_on_locked_coremask 00:06:12.343 ************************************ 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1342710 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1342710 /var/tmp/spdk.sock 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1342710 ']' 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.343 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.602 [2024-07-26 18:06:38.527889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:12.602 [2024-07-26 18:06:38.527991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342710 ] 00:06:12.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.602 [2024-07-26 18:06:38.560246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
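locking_app_on_locked_coremask now starts a target without --disable-cpumask-locks, so it takes the per-core lock at boot. The locks_exist checks seen throughout pipe lslocks -p PID into grep -q spdk_cpu_lock; the stray "lslocks: write error" lines are just lslocks complaining about the pipe that grep -q closes as soon as it matches. A sketch of that check:

  locks_exist() {
    # grep -q exits on first match, so lslocks may print "lslocks: write error" on the broken pipe
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid" && echo "core locks held by $spdk_tgt_pid"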
00:06:12.602 [2024-07-26 18:06:38.587140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.603 [2024-07-26 18:06:38.673501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1342833 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1342833 /var/tmp/spdk2.sock 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1342833 /var/tmp/spdk2.sock 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1342833 /var/tmp/spdk2.sock 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1342833 ']' 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.861 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.861 [2024-07-26 18:06:38.970734] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:12.861 [2024-07-26 18:06:38.970827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342833 ] 00:06:12.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.121 [2024-07-26 18:06:39.005848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
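The second target here (pid 1342833) is expected to fail: it asks for the same locked core 0, so the test wraps waitforlisten in NOT and passes only if startup aborts, which the claim_cpu_cores error just below confirms. A minimal stand-in for that wrapper (the real NOT in autotest_common.sh is more elaborate; this only inverts the exit status):

  NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
      return 1
    fi
    return 0
  }

  NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock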
00:06:13.121 [2024-07-26 18:06:39.064142] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1342710 has claimed it. 00:06:13.121 [2024-07-26 18:06:39.064188] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:13.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1342833) - No such process 00:06:13.690 ERROR: process (pid: 1342833) is no longer running 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1342710 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1342710 00:06:13.690 18:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.950 lslocks: write error 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1342710 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1342710 ']' 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1342710 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1342710 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1342710' 00:06:13.950 killing process with pid 1342710 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1342710 00:06:13.950 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1342710 00:06:14.519 00:06:14.519 real 0m1.972s 00:06:14.519 user 0m2.153s 00:06:14.519 sys 0m0.627s 00:06:14.519 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.519 18:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.519 ************************************ 00:06:14.519 END TEST locking_app_on_locked_coremask 00:06:14.519 ************************************ 00:06:14.519 18:06:40 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:14.519 18:06:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.519 18:06:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.519 18:06:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.519 ************************************ 00:06:14.519 START TEST locking_overlapped_coremask 00:06:14.519 ************************************ 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1343008 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1343008 /var/tmp/spdk.sock 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1343008 ']' 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.519 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.519 [2024-07-26 18:06:40.546929] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:14.519 [2024-07-26 18:06:40.547039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343008 ] 00:06:14.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.519 [2024-07-26 18:06:40.579538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
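locking_overlapped_coremask runs this first target with -m 0x7 and, just below, tries a second with -m 0x1c. The masks are plain bitmaps of core numbers, so the conflict can be predicted by ANDing them: 0x7 covers cores 0-2, 0x1c covers cores 2-4, and they collide exactly on core 2, the core named in the claim error that follows. For example:

  # Which cores do two SPDK core masks share?
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2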
00:06:14.519 [2024-07-26 18:06:40.605420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.778 [2024-07-26 18:06:40.696162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.778 [2024-07-26 18:06:40.696222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.778 [2024-07-26 18:06:40.696225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1343128 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1343128 /var/tmp/spdk2.sock 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1343128 /var/tmp/spdk2.sock 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1343128 /var/tmp/spdk2.sock 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1343128 ']' 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.036 18:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.036 [2024-07-26 18:06:40.989864] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
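After the overlapping target is rejected, the check_remaining_locks helper seen below asserts that the surviving target still holds exactly the locks for cores 0-2: it globs /var/tmp/spdk_cpu_lock_* and compares the result against a brace expansion of the expected names. A condensed sketch of that comparison:

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # string-compare the two lists
  }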
00:06:15.036 [2024-07-26 18:06:40.989944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343128 ] 00:06:15.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.036 [2024-07-26 18:06:41.024140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.036 [2024-07-26 18:06:41.079186] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1343008 has claimed it. 00:06:15.036 [2024-07-26 18:06:41.079228] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1343128) - No such process 00:06:15.602 ERROR: process (pid: 1343128) is no longer running 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1343008 00:06:15.602 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1343008 ']' 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1343008 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343008 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1343008' 00:06:15.603 killing process with pid 1343008 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1343008 00:06:15.603 18:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1343008 00:06:16.169 00:06:16.169 real 0m1.625s 00:06:16.169 user 0m4.396s 00:06:16.169 sys 0m0.449s 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.169 ************************************ 00:06:16.169 END TEST locking_overlapped_coremask 00:06:16.169 ************************************ 00:06:16.169 18:06:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:16.169 18:06:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.169 18:06:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.169 18:06:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.169 ************************************ 00:06:16.169 START TEST locking_overlapped_coremask_via_rpc 00:06:16.169 ************************************ 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1343296 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1343296 /var/tmp/spdk.sock 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1343296 ']' 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.169 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.169 [2024-07-26 18:06:42.223358] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
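locking_overlapped_coremask_via_rpc starts both targets with --disable-cpumask-locks, so neither claims cores at boot; the first then claims its mask with the framework_enable_cpumask_locks RPC, and the same call on the second target's socket must fail. A sketch of that sequence using SPDK's rpc.py client (the client path is an assumption; the method name and -s socket flag match the log):

  ./scripts/rpc.py framework_enable_cpumask_locks        # first target claims cores 0-2
  if ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "unexpected: overlapping claim succeeded" >&2   # test expects this branch never taken
    exit 1
  fi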
00:06:16.169 [2024-07-26 18:06:42.223449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343296 ] 00:06:16.169 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.169 [2024-07-26 18:06:42.254646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:16.169 [2024-07-26 18:06:42.284396] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.169 [2024-07-26 18:06:42.284426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.427 [2024-07-26 18:06:42.377992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.427 [2024-07-26 18:06:42.378047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.427 [2024-07-26 18:06:42.378050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1343313 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1343313 /var/tmp/spdk2.sock 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1343313 ']' 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.686 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 [2024-07-26 18:06:42.667501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:16.686 [2024-07-26 18:06:42.667583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343313 ] 00:06:16.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.686 [2024-07-26 18:06:42.702118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:16.686 [2024-07-26 18:06:42.756811] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.686 [2024-07-26 18:06:42.756836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.944 [2024-07-26 18:06:42.933075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.944 [2024-07-26 18:06:42.933138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.944 [2024-07-26 18:06:42.933140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.510 [2024-07-26 18:06:43.620169] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1343296 has claimed it. 
00:06:17.510 request: 00:06:17.510 { 00:06:17.510 "method": "framework_enable_cpumask_locks", 00:06:17.510 "req_id": 1 00:06:17.510 } 00:06:17.510 Got JSON-RPC error response 00:06:17.510 response: 00:06:17.510 { 00:06:17.510 "code": -32603, 00:06:17.510 "message": "Failed to claim CPU core: 2" 00:06:17.510 } 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1343296 /var/tmp/spdk.sock 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1343296 ']' 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.510 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1343313 /var/tmp/spdk2.sock 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1343313 ']' 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
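The request/response pair above is the raw JSON-RPC exchange behind that failure: the method call comes back with error code -32603 ("Failed to claim CPU core: 2"). Purely as illustration, the same exchange could be reproduced by hand over the UNIX socket with a netcat built with -U support; the reply shape below is taken from the log:

  printf '%s' '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}' \
    | nc -U /var/tmp/spdk2.sock
  # expected reply: {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"Failed to claim CPU core: 2"}}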
00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.768 18:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.029 00:06:18.029 real 0m1.942s 00:06:18.029 user 0m1.003s 00:06:18.029 sys 0m0.165s 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.029 18:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.029 ************************************ 00:06:18.029 END TEST locking_overlapped_coremask_via_rpc 00:06:18.029 ************************************ 00:06:18.029 18:06:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.029 18:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1343296 ]] 00:06:18.029 18:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1343296 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1343296 ']' 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1343296 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343296 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1343296' 00:06:18.029 killing process with pid 1343296 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1343296 00:06:18.029 18:06:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1343296 00:06:18.600 18:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1343313 ]] 00:06:18.600 18:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1343313 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1343313 ']' 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1343313 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343313 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1343313' 00:06:18.600 killing process with pid 1343313 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1343313 00:06:18.600 18:06:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1343313 00:06:18.859 18:06:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.859 18:06:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.860 18:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1343296 ]] 00:06:18.860 18:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1343296 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1343296 ']' 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1343296 00:06:18.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1343296) - No such process 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1343296 is not found' 00:06:18.860 Process with pid 1343296 is not found 00:06:18.860 18:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1343313 ]] 00:06:18.860 18:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1343313 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1343313 ']' 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1343313 00:06:18.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1343313) - No such process 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1343313 is not found' 00:06:18.860 Process with pid 1343313 is not found 00:06:18.860 18:06:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.860 00:06:18.860 real 0m15.361s 00:06:18.860 user 0m26.905s 00:06:18.860 sys 0m5.228s 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.860 18:06:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.860 ************************************ 00:06:18.860 END TEST cpu_locks 00:06:18.860 ************************************ 00:06:19.123 00:06:19.123 real 0m41.182s 00:06:19.123 user 1m18.416s 00:06:19.123 sys 0m9.255s 00:06:19.123 18:06:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.123 18:06:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.123 ************************************ 00:06:19.123 END TEST event 00:06:19.123 ************************************ 00:06:19.123 18:06:45 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:19.123 18:06:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.123 18:06:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.123 18:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.123 ************************************ 00:06:19.123 START TEST thread 00:06:19.123 ************************************ 00:06:19.123 18:06:45 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:19.123 * Looking for test storage... 00:06:19.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:19.123 18:06:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.123 18:06:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:19.123 18:06:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.123 18:06:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.123 ************************************ 00:06:19.123 START TEST thread_poller_perf 00:06:19.123 ************************************ 00:06:19.123 18:06:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.123 [2024-07-26 18:06:45.155177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:19.123 [2024-07-26 18:06:45.155244] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343674 ] 00:06:19.123 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.123 [2024-07-26 18:06:45.188628] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.123 [2024-07-26 18:06:45.215464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.400 [2024-07-26 18:06:45.306345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.400 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.347 ====================================== 00:06:20.347 busy:2714455716 (cyc) 00:06:20.347 total_run_count: 293000 00:06:20.347 tsc_hz: 2700000000 (cyc) 00:06:20.347 ====================================== 00:06:20.347 poller_cost: 9264 (cyc), 3431 (nsec) 00:06:20.347 00:06:20.347 real 0m1.259s 00:06:20.347 user 0m1.170s 00:06:20.347 sys 0m0.083s 00:06:20.347 18:06:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.347 18:06:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.347 ************************************ 00:06:20.347 END TEST thread_poller_perf 00:06:20.347 ************************************ 00:06:20.347 18:06:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.347 18:06:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:20.347 18:06:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.347 18:06:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.347 ************************************ 00:06:20.347 START TEST thread_poller_perf 00:06:20.347 ************************************ 00:06:20.347 18:06:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.347 [2024-07-26 18:06:46.463106] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
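The poller_cost line in the run above is plain arithmetic over the printed counters: cycles per poller invocation = busy / total_run_count, and nanoseconds follow from tsc_hz. A quick shell check of the first run's numbers (values copied from the log):

    busy=2714455716 runs=293000 tsc_hz=2700000000
    echo $((busy / runs))                          # 9264 cyc per poller call
    echo $((busy * 1000000000 / tsc_hz / runs))    # 3431 nsec per call

The second run below uses a 0 us period (-l 0), so the same busy window is amortized over roughly 13x more calls, which is why its per-call cost drops to 688 cyc / 254 nsec.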
00:06:20.347 [2024-07-26 18:06:46.463169] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343836 ] 00:06:20.607 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.607 [2024-07-26 18:06:46.496121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.607 [2024-07-26 18:06:46.525925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.607 [2024-07-26 18:06:46.617013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.607 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:21.984 ====================================== 00:06:21.984 busy:2702316634 (cyc) 00:06:21.984 total_run_count: 3923000 00:06:21.984 tsc_hz: 2700000000 (cyc) 00:06:21.984 ====================================== 00:06:21.984 poller_cost: 688 (cyc), 254 (nsec) 00:06:21.984 00:06:21.984 real 0m1.246s 00:06:21.984 user 0m1.160s 00:06:21.984 sys 0m0.080s 00:06:21.984 18:06:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.984 18:06:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.984 ************************************ 00:06:21.984 END TEST thread_poller_perf 00:06:21.984 ************************************ 00:06:21.984 18:06:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:21.984 00:06:21.984 real 0m2.660s 00:06:21.984 user 0m2.406s 00:06:21.984 sys 0m0.253s 00:06:21.984 18:06:47 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.984 18:06:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.984 ************************************ 00:06:21.984 END TEST thread 00:06:21.984 ************************************ 00:06:21.984 18:06:47 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:21.984 18:06:47 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.984 18:06:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.985 18:06:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.985 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.985 ************************************ 00:06:21.985 START TEST app_cmdline 00:06:21.985 ************************************ 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.985 * Looking for test storage... 
00:06:21.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.985 18:06:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.985 18:06:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1344142 00:06:21.985 18:06:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.985 18:06:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1344142 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1344142 ']' 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.985 18:06:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.985 [2024-07-26 18:06:47.865857] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:21.985 [2024-07-26 18:06:47.865933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344142 ] 00:06:21.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.985 [2024-07-26 18:06:47.896917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
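Because spdk_tgt above was started with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two methods are served on /var/tmp/spdk.sock; anything else is rejected with JSON-RPC error -32601, which is exactly what the env_dpdk_get_mem_stats probe further down runs into. The same checks done by hand would look like this (socket and script paths as in this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock rpc_get_methods         # -> the two allowed methods
    $RPC -s /var/tmp/spdk.sock spdk_get_version        # -> the version JSON below
    $RPC -s /var/tmp/spdk.sock env_dpdk_get_mem_stats  # -> -32601 'Method not found'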
00:06:21.985 [2024-07-26 18:06:47.924008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.985 [2024-07-26 18:06:48.010029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.243 18:06:48 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.243 18:06:48 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:22.243 18:06:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.503 { 00:06:22.503 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:22.503 "fields": { 00:06:22.503 "major": 24, 00:06:22.503 "minor": 9, 00:06:22.503 "patch": 0, 00:06:22.503 "suffix": "-pre", 00:06:22.503 "commit": "704257090" 00:06:22.503 } 00:06:22.503 } 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.503 18:06:48 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.503 18:06:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.503 18:06:48 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.503 18:06:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.503 18:06:48 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.504 18:06:48 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.762 request: 00:06:22.762 { 00:06:22.762 "method": 
"env_dpdk_get_mem_stats", 00:06:22.762 "req_id": 1 00:06:22.762 } 00:06:22.762 Got JSON-RPC error response 00:06:22.762 response: 00:06:22.762 { 00:06:22.762 "code": -32601, 00:06:22.762 "message": "Method not found" 00:06:22.762 } 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.762 18:06:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1344142 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1344142 ']' 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1344142 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1344142 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1344142' 00:06:22.762 killing process with pid 1344142 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@969 -- # kill 1344142 00:06:22.762 18:06:48 app_cmdline -- common/autotest_common.sh@974 -- # wait 1344142 00:06:23.329 00:06:23.329 real 0m1.449s 00:06:23.329 user 0m1.769s 00:06:23.329 sys 0m0.449s 00:06:23.329 18:06:49 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.329 18:06:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 ************************************ 00:06:23.329 END TEST app_cmdline 00:06:23.329 ************************************ 00:06:23.329 18:06:49 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.329 18:06:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.329 18:06:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.329 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 ************************************ 00:06:23.329 START TEST version 00:06:23.329 ************************************ 00:06:23.329 18:06:49 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.329 * Looking for test storage... 
00:06:23.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.329 18:06:49 version -- app/version.sh@17 -- # get_header_version major 00:06:23.329 18:06:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # cut -f2 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.329 18:06:49 version -- app/version.sh@17 -- # major=24 00:06:23.329 18:06:49 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.329 18:06:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # cut -f2 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.329 18:06:49 version -- app/version.sh@18 -- # minor=9 00:06:23.329 18:06:49 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.329 18:06:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # cut -f2 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.329 18:06:49 version -- app/version.sh@19 -- # patch=0 00:06:23.329 18:06:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.329 18:06:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # cut -f2 00:06:23.329 18:06:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.329 18:06:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.329 18:06:49 version -- app/version.sh@22 -- # version=24.9 00:06:23.329 18:06:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.329 18:06:49 version -- app/version.sh@28 -- # version=24.9rc0 00:06:23.329 18:06:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.329 18:06:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.329 18:06:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:23.329 18:06:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:23.329 00:06:23.329 real 0m0.101s 00:06:23.329 user 0m0.057s 00:06:23.329 sys 0m0.065s 00:06:23.329 18:06:49 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.329 18:06:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 ************************************ 00:06:23.329 END TEST version 00:06:23.329 ************************************ 00:06:23.329 18:06:49 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:23.329 18:06:49 -- spdk/autotest.sh@202 -- # uname -s 00:06:23.329 18:06:49 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:23.329 18:06:49 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:23.330 18:06:49 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:23.330 18:06:49 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
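The get_header_version calls above are a grep/cut/tr pipeline over include/spdk/version.h. A standalone sketch of the same extraction, assuming (as cut's default tab delimiter implies) that the header separates macro and value with a tab, and mirroring the -pre -> rc0 mapping that the 24.9rc0 comparison in the test uses:

    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$H" | cut -f2 | tr -d '"'; }
    major=$(ver MAJOR) minor=$(ver MINOR) patch=$(ver PATCH) suffix=$(ver SUFFIX)
    version=$major.$minor
    (( patch != 0 )) && version+=".$patch"
    [[ $suffix == -pre ]] && version+=rc0
    echo "$version"    # -> 24.9rc0, matching py_version in the test above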
00:06:23.330 18:06:49 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:23.330 18:06:49 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:23.330 18:06:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.330 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.330 18:06:49 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:23.330 18:06:49 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:23.330 18:06:49 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:23.330 18:06:49 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:23.330 18:06:49 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:23.330 18:06:49 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:23.330 18:06:49 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.330 18:06:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.330 18:06:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.330 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.330 ************************************ 00:06:23.330 START TEST nvmf_tcp 00:06:23.330 ************************************ 00:06:23.330 18:06:49 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.330 * Looking for test storage... 00:06:23.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.330 18:06:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.330 18:06:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.330 18:06:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.330 18:06:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.330 18:06:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.330 18:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.588 ************************************ 00:06:23.588 START TEST nvmf_target_core 00:06:23.589 ************************************ 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.589 * Looking for test storage... 00:06:23.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.589 ************************************ 00:06:23.589 START TEST nvmf_abort 00:06:23.589 ************************************ 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.589 * Looking for test storage... 
00:06:23.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.589 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
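nvmftestinit above now discovers the two ice ports and wires them up for a phy TCP run (traced below); stripped of the xtrace noise, the namespace plumbing it performs is:

    # Device names cvl_0_0/cvl_0_1 as discovered in this run; the target NIC
    # moves into its own netns, the initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2    # sanity check, matching the ping statistics below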
00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:23.590 18:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.120 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.120 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:26.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:26.121 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.121 18:06:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:26.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:26.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:26.121 
18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:26.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:06:26.121 00:06:26.121 --- 10.0.0.2 ping statistics --- 00:06:26.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.121 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:06:26.121 00:06:26.121 --- 10.0.0.1 ping statistics --- 00:06:26.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.121 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:26.121 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1346075 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1346075 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1346075 ']' 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.122 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 [2024-07-26 18:06:51.875026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:26.122 [2024-07-26 18:06:51.875143] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.122 [2024-07-26 18:06:51.913251] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.122 [2024-07-26 18:06:51.946527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.122 [2024-07-26 18:06:52.038586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.122 [2024-07-26 18:06:52.038653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.122 [2024-07-26 18:06:52.038684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.122 [2024-07-26 18:06:52.038706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.122 [2024-07-26 18:06:52.038725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
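nvmfappstart above launches the target inside the namespace just created; the direct equivalent of the invocation traced at nvmf/common.sh@480 (shm id, event mask and coremask as in this run) is:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE    # reactors on cores 1-3, all tracepoint groups on
    # waitforlisten then blocks on /var/tmp/spdk.sock (pid 1346075 here) before
    # the abort test starts issuing rpc_cmd calls such as nvmf_create_transport.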
00:06:26.122 [2024-07-26 18:06:52.038799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.122 [2024-07-26 18:06:52.038912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.122 [2024-07-26 18:06:52.038918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 [2024-07-26 18:06:52.184197] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 Malloc0 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 Delay0 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 [2024-07-26 18:06:52.256732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.122 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.381 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.381 18:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:26.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.381 [2024-07-26 18:06:52.363312] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:28.912 Initializing NVMe Controllers 00:06:28.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:28.912 controller IO queue size 128 less than required 00:06:28.912 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:28.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:28.912 Initialization complete. Launching workers. 
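The provisioning that precedes the abort run is a short RPC sequence, condensed below with the exact names, sizes, and addresses from the trace. The Delay0 wrapper injects roughly one second of latency (the bdev_delay_create values are in microseconds) so the abort example always has in-flight commands to cancel.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MB backing bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s read/write latency
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Drive aborts for 1 second at queue depth 128 against the delayed namespace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128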
00:06:28.912 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 32957 00:06:28.912 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33021, failed to submit 62 00:06:28.912 success 32961, unsuccess 60, failed 0 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:28.912 rmmod nvme_tcp 00:06:28.912 rmmod nvme_fabrics 00:06:28.912 rmmod nvme_keyring 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1346075 ']' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1346075 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1346075 ']' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1346075 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1346075 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1346075' 00:06:28.912 killing process with pid 1346075 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1346075 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1346075 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.912 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.814 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:30.814 00:06:30.814 real 0m7.380s 00:06:30.814 user 0m10.885s 00:06:30.814 sys 0m2.539s 00:06:30.814 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.814 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 END TEST nvmf_abort 00:06:30.814 ************************************ 00:06:31.073 18:06:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.073 18:06:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.073 18:06:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.073 18:06:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.073 ************************************ 00:06:31.073 START TEST nvmf_ns_hotplug_stress 00:06:31.073 ************************************ 00:06:31.073 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.073 * Looking for test storage... 
00:06:31.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:31.073 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:32.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.978 18:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:32.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.978 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:32.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:32.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:32.979 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:33.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:33.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:06:33.237 00:06:33.237 --- 10.0.0.2 ping statistics --- 00:06:33.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.237 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:06:33.237 00:06:33.237 --- 10.0.0.1 ping statistics --- 00:06:33.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.237 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1348412 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1348412 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1348412 ']' 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.237 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
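The two successful pings above confirm the topology that nvmftestinit built from the two E810 ports: cvl_0_0 is moved into a private namespace as the target-side interface, cvl_0_1 stays in the root namespace as the initiator side, and port 4420 is opened for the TCP listener. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns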
00:06:33.238 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.238 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.238 [2024-07-26 18:06:59.228126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:33.238 [2024-07-26 18:06:59.228201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.238 [2024-07-26 18:06:59.262892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.238 [2024-07-26 18:06:59.294655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.496 [2024-07-26 18:06:59.393081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.496 [2024-07-26 18:06:59.393148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.496 [2024-07-26 18:06:59.393173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.496 [2024-07-26 18:06:59.393196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.496 [2024-07-26 18:06:59.393214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:33.496 [2024-07-26 18:06:59.393280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.496 [2024-07-26 18:06:59.393327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.496 [2024-07-26 18:06:59.393333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:33.496 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:33.754 [2024-07-26 18:06:59.763376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.754 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:34.012 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.272 [2024-07-26 18:07:00.282684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.272 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:34.531 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:34.788 Malloc0 00:06:34.788 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:35.046 Delay0 00:06:35.046 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.304 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:35.562 NULL1 00:06:35.562 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:35.820 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1348718 00:06:35.820 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:35.820 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:35.820 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.820 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.230 Read completed with error (sct=0, sc=11) 00:06:37.230 18:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.230 18:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:37.230 18:07:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:37.487 true 00:06:37.487 18:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:37.488 18:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.424 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.424 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:38.424 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:38.682 true 00:06:38.682 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:38.682 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.939 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.197 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:39.197 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:39.455 true 00:06:39.455 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:39.455 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.712 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.970 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:39.970 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:40.228 true 00:06:40.228 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:40.228 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.164 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.427 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:41.427 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:41.684 true 00:06:41.684 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:41.684 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.942 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.199 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:42.199 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:42.456 true 00:06:42.456 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:42.456 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.391 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.649 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:43.649 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:43.906 true 00:06:43.906 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:43.906 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.165 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.422 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:44.422 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1008 00:06:44.681 true 00:06:44.681 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:44.681 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.614 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.872 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:45.872 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:46.130 true 00:06:46.130 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:46.130 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.389 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.648 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:46.648 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:46.648 true 00:06:46.907 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:46.907 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.840 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.840 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:47.840 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:48.098 true 00:06:48.098 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:48.098 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.356 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.614 18:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:48.614 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:48.872 true 00:06:48.872 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:48.872 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.809 18:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.066 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:50.066 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:50.324 true 00:06:50.324 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:50.324 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.582 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.840 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:50.840 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:51.098 true 00:06:51.098 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:51.098 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.063 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.320 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:52.320 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:52.320 true 00:06:52.578 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:52.578 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.836 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.836 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:52.836 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:53.093 true 00:06:53.093 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:53.093 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.030 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.287 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:54.287 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:54.545 true 00:06:54.545 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:54.545 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.802 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.059 18:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:55.059 18:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:55.316 true 00:06:55.316 18:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:55.316 18:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.251 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.508 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:56.508 18:07:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:56.766 true 00:06:56.766 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:56.766 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.024 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.283 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:57.283 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:57.283 true 00:06:57.540 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:57.540 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.477 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.477 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:58.477 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:58.734 true 00:06:58.734 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:58.734 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.991 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.249 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:59.249 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:59.507 true 00:06:59.507 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:06:59.507 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.765 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.023 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.023 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:00.281 true 00:07:00.281 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:07:00.281 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.217 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.475 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:01.475 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:01.733 true 00:07:01.733 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:07:01.733 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.991 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.249 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:02.249 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:02.507 true 00:07:02.507 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:07:02.507 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.441 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.700 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:03.700 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:03.958 true 00:07:03.958 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718 00:07:03.958 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.216 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.474 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:04.474 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:04.732 true
00:07:04.732 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718
00:07:04.732 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.989 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:05.248 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:05.248 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:05.506 true
00:07:05.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718
00:07:05.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.889 Initializing NVMe Controllers
00:07:06.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:06.889 Controller IO queue size 128, less than required.
00:07:06.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:06.889 Controller IO queue size 128, less than required.
00:07:06.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:06.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:06.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:06.889 Initialization complete. Launching workers.
00:07:06.889 ========================================================
00:07:06.889                                                                                                       Latency(us)
00:07:06.889 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:06.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     684.96       0.33   96314.63    2619.57 1040634.76
00:07:06.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10346.22       5.05   12372.19    3346.90  448356.64
00:07:06.889 ========================================================
00:07:06.889 Total                                                                    :   11031.19       5.39   17584.46    2619.57 1040634.76
00:07:06.889
00:07:06.889 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.889 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:06.889 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:07.147 true
00:07:07.147 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348718
00:07:07.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1348718) - No such process
00:07:07.147 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1348718
00:07:07.147 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.404 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:07.662 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:07.662 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:07.662 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:07.662 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.662 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:07.920 null0
00:07:07.920 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.920 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.920 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:08.178 null1
00:07:08.178 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:08.178 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:08.178 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:08.437 null2 00:07:08.437 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.437 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.437 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:08.437 null3 00:07:08.697 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.697 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.697 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:08.697 null4 00:07:08.956 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.956 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.956 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:08.956 null5 00:07:08.956 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.956 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.956 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:09.214 null6 00:07:09.214 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.214 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.214 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:09.473 null7 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
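As a consistency check on the bdevperf summary above, the Total row follows from the two per-namespace rows: IOPS and MiB/s sum, min/max are the elementwise extremes, and the overall average latency is the IOPS-weighted mean of the per-namespace averages (all values taken from the table; the residual is rounding of the printed inputs):

\[ \mathrm{IOPS}_{\mathrm{tot}} = 684.96 + 10346.22 = 11031.18 \approx 11031.19 \]
\[ \bar{L}_{\mathrm{tot}} = \frac{684.96 \times 96314.63 + 10346.22 \times 12372.19}{11031.19} \approx 17584.6\ \mu\mathrm{s} \]

which agrees with the reported 17584.46 us average to within rounding.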
00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
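The single-namespace phase that ends just before the bdevperf summary repeats lines 44-50 of ns_hotplug_stress.sh until the I/O workload exits; the suppressed "Read completed with error (sct=0, sc=11)" messages are consistent with reads landing in the window where NSID 1 is detached. A minimal sketch of that loop, reconstructed only from the xtrace records in this log -- PERF_PID and the starting null_size are assumptions (the trace joins at 1016); the commands and line roles are as traced:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    null_size=1000                                     # assumed initial value
    while kill -0 "$PERF_PID"; do                      # line 44: loop while the workload is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                   # line 49: next size step
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # line 50: grow NULL1 under load
    done                                               # kill prints "No such process" once the workload exits
    wait "$PERF_PID"                                   # line 53: reap the finished workload
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # line 54: cleanup
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2        # line 55: cleanup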
00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
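From here on the trace interleaves eight backgrounded workers plus the launcher, so consecutive records jump between script lines 14-18 (the per-worker loop) and 58-66 (setup and wait). A sketch of that structure as it reads off the line tags -- the function wrapper and the backgrounding are assumptions, rpc_py as in the sketch above, the commands are as traced:

    add_remove() {                          # lines 14-18: one worker, fixed (nsid, bdev) pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do      # line 16: ten add/remove rounds per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }

    nthreads=8                              # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do    # lines 59-60: one null bdev per worker
        "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do    # lines 62-64: launch the workers
        add_remove $((i + 1)) "null$i" &    # line 63: worker i cycles NSID i+1
        pids+=($!)                          # line 64: collect worker PIDs
    done
    wait "${pids[@]}"                       # line 66: the 'wait 1352805 1352806 ...' traced below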
00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1352805 1352806 1352808 1352810 1352812 1352814 1352816 1352818 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.473 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.731 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.731 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.731 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.731 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.731 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.731 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.990 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.990 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.990 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.990 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.990 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.990 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.990 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.990 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.249 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.507 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.765 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.765 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.766 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.024 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.282 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.283 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.283 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.283 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.283 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.541 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
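For quick reference, the four RPCs this phase keeps cycling, with argument shapes exactly as they appear in the trace (a running SPDK target is assumed; rpc_py as above; bdev sizes in MB, block size in bytes):

    "$rpc_py" bdev_null_create null0 100 4096                               # bdev name, size, block size
    "$rpc_py" bdev_null_resize NULL1 1029                                   # bdev name, new size
    "$rpc_py" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2   # attach null2 as NSID 3
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3         # detach NSID 3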
00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.057 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.315 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.315 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.315 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.316 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.574 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.574 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.574 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.574 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.575 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.575 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.575 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.575 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.833 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.092 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.350 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.609 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.867 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.867 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.867 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.867 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.867 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.867 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.867 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.867 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.867 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.125 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.383 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
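The churn above is ns_hotplug_stress.sh lines 16-18 cycling all eight namespaces on cnode1 while initiator I/O keeps running. Below is a minimal behavioral sketch of one worker, reconstructed from the sh@16-18 xtrace; the shuffled nsid order and interleaved (( ++i )) counters indicate several such workers run concurrently, so the fan-out and the $rpc/$SPDK_DIR shorthands are assumptions, not the shipped script:

```bash
# Sketch of the hotplug churn traced above (paths shortened for readability).
# Assumes the null0..null7 null bdevs were created earlier in the test.
rpc="$SPDK_DIR/scripts/rpc.py"        # stands in for the full rpc.py path above
nqn=nqn.2016-06.io.spdk:cnode1

hotplug_worker() {
    local n=$1
    for ((i = 0; i < 10; ++i)); do                                      # sh@16
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"    # sh@17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"                     # sh@18
    done
}

for n in {1..8}; do hotplug_worker "$n" & done
wait
```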
00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.641 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.642 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.900 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.158 rmmod nvme_tcp 00:07:15.158 rmmod nvme_fabrics 00:07:15.158 rmmod nvme_keyring 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1348412 ']' 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1348412 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1348412 ']' 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1348412 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348412 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348412' 00:07:15.158 killing process with pid 1348412 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1348412 00:07:15.158 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1348412 00:07:15.416 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.416 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.416 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.417 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.417 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.417 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.417 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.417 18:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.949 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.949 00:07:17.949 real 0m46.524s 00:07:17.949 user 3m31.210s 00:07:17.949 sys 0m16.772s 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.950 ************************************ 00:07:17.950 END TEST nvmf_ns_hotplug_stress 00:07:17.950 ************************************ 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.950 ************************************ 00:07:17.950 START TEST nvmf_delete_subsystem 00:07:17.950 ************************************ 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:17.950 * Looking for test storage... 
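The teardown just logged (trap removal through the final ip -4 addr flush cvl_0_1) is the standard nvmftestfini path from the sourced nvmf/common.sh: unload the initiator kernel modules, kill the target, and drop the test network namespace. The whole stress test closes at 46.5 s wall time before run_test moves on to nvmf_delete_subsystem. A condensed sketch in trace order follows; the function names match common.sh, but the bodies are simplified assumptions (the shipped helpers carry more retry and error handling):

```bash
# Condensed sketch of the nvmftestfini teardown traced above.
nvmftestfini() {
    sync
    for i in {1..20}; do modprobe -v -r nvme-tcp && break; done
        # -> the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"      # guarded SIGTERM of the nvmf_tgt reactor
    remove_spdk_ns              # delete the cvl_0_0_ns_spdk network namespace
    ip -4 addr flush cvl_0_1    # drop the initiator-side test address
}

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                     # already gone
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        # never signal a bare sudo wrapper; here comm= reports reactor_1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                     # reactor exits on SIGTERM
}
```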
00:07:17.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.950 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.851 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.851 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
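The array bootstrapping above (e810/x722/mlx keyed by PCI vendor:device IDs) feeds the interface scan that follows: each supported function is mapped to its kernel netdevs through sysfs, and only links that are up survive. Here is a sketch of that matching step, reconstructed from the common.sh@382-401 xtrace below; population of pci_devs from the ID tables is omitted, and the operstate read is an assumption behind the traced `[[ up == up ]]` check:

```bash
# Produces the "Found net devices under 0000:0a:00.x: cvl_0_x" lines below.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)        # sh@383: sysfs netdev dirs
    for net_dev in "${!pci_net_devs[@]}"; do                # sh@389-390: keep links that are up
        [[ $(< "${pci_net_devs[net_dev]}/operstate") == up ]] ||
            unset -v 'pci_net_devs[net_dev]'
    done
    ((${#pci_net_devs[@]} == 0)) && continue                # sh@394: skip portless functions
    pci_net_devs=("${pci_net_devs[@]##*/}")                 # sh@399: strip path, keep ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}" # sh@400
    net_devs+=("${pci_net_devs[@]}")                        # sh@401
done
```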
00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:19.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:19.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:19.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:19.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.852 18:07:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:07:19.852 00:07:19.852 --- 10.0.0.2 ping statistics --- 00:07:19.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.852 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:19.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:07:19.852 00:07:19.852 --- 10.0.0.1 ping statistics --- 00:07:19.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.852 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1355573 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1355573 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1355573 ']' 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.852 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.852 [2024-07-26 18:07:45.822097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
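The target is now being brought up inside the namespace prepared at 18:07:45. For reference, the nvmf_tcp_init sequence traced above reduces to the commands below, all taken from the log itself: the E810 port cvl_0_0 moves into a private namespace as the target side, cvl_0_1 stays in the root namespace as the initiator side, so the NVMe/TCP traffic genuinely crosses the physical link ($SPDK_DIR again stands in for the full workspace path):

```bash
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                # both directions answer in ~0.2 ms
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# every target-side command, including the target itself, is then namespace-wrapped:
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
```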
00:07:19.852 [2024-07-26 18:07:45.822205] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.852 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.852 [2024-07-26 18:07:45.860868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.852 [2024-07-26 18:07:45.891004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.852 [2024-07-26 18:07:45.981796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.852 [2024-07-26 18:07:45.981862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.852 [2024-07-26 18:07:45.981879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.852 [2024-07-26 18:07:45.981894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.852 [2024-07-26 18:07:45.981906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.852 [2024-07-26 18:07:45.981969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.852 [2024-07-26 18:07:45.981976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 [2024-07-26 18:07:46.127531] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 [2024-07-26 18:07:46.143747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 NULL1 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 Delay0 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1355719 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:20.112 18:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:20.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.112 [2024-07-26 18:07:46.218461] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
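With the listener up, delete_subsystem.sh deliberately stacks a slow namespace and then races a deletion against live I/O. The rpc_cmd and perf invocations above, collected in one place ($rpc once more stands in for the full rpc.py path; the bdev_delay_create latencies are in microseconds, so every operation takes ~1 s, which guarantees a deep in-flight queue when the subsystem is deleted):

```bash
$rpc nvmf_create_transport -t tcp -o -u 8192            # sh@15: TCP transport, 8 KiB IO unit
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                      # sh@16: allow any host, <=10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                          # sh@17: listen in the target netns
$rpc bdev_null_create NULL1 1000 512                    # sh@18: 1000 MiB, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # sh@23: ~1 s on every read/write
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # sh@24

# sh@26-28: 5 s of queue-depth-128 randrw traffic from the initiator...
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
# sh@32: ...and the deletion lands while those commands are still in flight
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```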
00:07:22.676 18:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:22.676 18:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.676 18:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[log trimmed: a long run of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, punctuated by repeated 'starting I/O failed: -6' entries, as in-flight perf I/O is failed while the subsystem delete tears down its qpairs; the recv-state errors below were interleaved with that output]
00:07:22.677 [2024-07-26 18:07:48.482008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2273100 is same with the state(5) to be set
00:07:22.677 [2024-07-26 18:07:48.482699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272d40 is same with the state(5) to be set
00:07:22.677 [2024-07-26 18:07:48.483235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7028000c00 is same with the state(5) to be set
00:07:23.618 [2024-07-26 18:07:49.439634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2290b40 is same with the state(5) to be set
00:07:23.618 [2024-07-26 18:07:49.485838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f702800d000 is same with the state(5) to be set
00:07:23.618 [2024-07-26 18:07:49.486197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f702800d7a0 is same with the state(5) to be set
00:07:23.618 [2024-07-26 18:07:49.486379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2279300 is same with the state(5) to be set
00:07:23.618 [2024-07-26 18:07:49.487042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272f20 is same with the state(5) to be set
00:07:23.618 Initializing NVMe Controllers
00:07:23.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:23.618 Controller IO queue size 128, less than required.
00:07:23.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:23.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:23.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:23.618 Initialization complete. Launching workers.
00:07:23.618 ========================================================
00:07:23.618 Latency(us)
00:07:23.618 Device Information                                                       :      IOPS     MiB/s    Average        min        max
00:07:23.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    169.73      0.08  897747.63     725.68 1013798.27
00:07:23.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    176.68      0.09  882249.79     406.00 1013655.29
00:07:23.618 ========================================================
00:07:23.618 Total                                                                    :    346.40      0.17  889843.29     406.00 1013798.27
00:07:23.618
00:07:23.618 [2024-07-26 18:07:49.487442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290b40 (9): Bad file descriptor
00:07:23.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:23.618 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.618 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:23.618 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1355719
00:07:23.618 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1355719
00:07:23.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1355719) - No such process
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1355719
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1355719
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1355719
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:23.879 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.879 18:07:49
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.879 [2024-07-26 18:07:50.012463] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.879 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.138 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.138 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1356132 00:07:24.138 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:24.138 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356132 00:07:24.138 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.138 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:24.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.138 [2024-07-26 18:07:50.073320] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
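For the second pass the script recreates the subsystem (with -m 10, capping it at ten namespaces), reattaches Delay0, and starts a shorter 3-second perf run; this time it waits for perf to exit on its own rather than deleting the subsystem out from under it. The wait reduces to the classic kill -0 poll seen in the loop that follows; a sketch under the same bounds (variable names are illustrative):

    perf_pid=1356132   # pid of the backgrounded spdk_nvme_perf, captured via $!
    delay=0
    # kill -0 sends no signal; it only tests whether the pid still exists.
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then                 # give up after ~10 s (20 x 0.5 s)
            echo "spdk_nvme_perf did not exit" >&2
            exit 1
        fi
        sleep 0.5
    done

Once the pid disappears, kill -0 itself reports "No such process", which is exactly the shell diagnostic the log prints before the script moves on to wait and teardown.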
00:07:24.397 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:24.397 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356132
00:07:24.397 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[log trimmed: the delay/kill -0/sleep 0.5 probe above repeats about twice a second (18:07:50 through 18:07:53) while spdk_nvme_perf finishes its 3-second run]
00:07:27.497 Initializing NVMe Controllers
00:07:27.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:27.497 Controller IO queue size 128, less than required.
00:07:27.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:27.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:27.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:27.497 Initialization complete. Launching workers.
00:07:27.497 ======================================================== 00:07:27.497 Latency(us) 00:07:27.497 Device Information : IOPS MiB/s Average min max 00:07:27.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004547.98 1000229.67 1041761.49 00:07:27.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004404.57 1000350.12 1011857.28 00:07:27.497 ======================================================== 00:07:27.497 Total : 256.00 0.12 1004476.28 1000229.67 1041761.49 00:07:27.497 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356132 00:07:27.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1356132) - No such process 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1356132 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.497 rmmod nvme_tcp 00:07:27.497 rmmod nvme_fabrics 00:07:27.497 rmmod nvme_keyring 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1355573 ']' 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1355573 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1355573 ']' 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1355573 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1355573 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1355573' 00:07:27.497 killing process with pid 1355573 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1355573 00:07:27.497 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1355573 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.756 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.296 00:07:30.296 real 0m12.326s 00:07:30.296 user 0m28.115s 00:07:30.296 sys 0m2.988s 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.296 ************************************ 00:07:30.296 END TEST nvmf_delete_subsystem 00:07:30.296 ************************************ 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.296 ************************************ 00:07:30.296 START TEST nvmf_host_management 00:07:30.296 ************************************ 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.296 * Looking for test storage... 
00:07:30.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.296 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet, repeated by each re-sourcing of paths/export.sh, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... likewise ...]:/var/lib/snapd/snap/bin
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... likewise ...]:/var/lib/snapd/snap/bin
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH, elided ...]:/var/lib/snapd/snap/bin
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.296 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.297 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.297 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.297 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.297 18:07:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:32.204 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.204 
18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.205 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:07:32.205 00:07:32.205 --- 10.0.0.2 ping statistics --- 00:07:32.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.205 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:32.205 00:07:32.205 --- 10.0.0.1 ping statistics --- 00:07:32.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.205 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:32.205 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1358477 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1358477 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1358477 ']' 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.206 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.206 [2024-07-26 18:07:58.161292] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:32.206 [2024-07-26 18:07:58.161367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.206 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.206 [2024-07-26 18:07:58.198213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.206 [2024-07-26 18:07:58.230135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.206 [2024-07-26 18:07:58.321637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.206 [2024-07-26 18:07:58.321700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.206 [2024-07-26 18:07:58.321725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.206 [2024-07-26 18:07:58.321746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.206 [2024-07-26 18:07:58.321765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
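The nvmf_tgt here runs inside the cvl_0_0_ns_spdk network namespace built during nvmftestinit above, with core mask 0x1E, which matches the four reactor threads reported next on cores 1-4 and leaves core 0 free for host-side tools. A rough equivalent of the nvmfappstart/waitforlisten pair, with the listen-wait approximated by polling the RPC socket (paths are taken from the log; the loop is a simplified stand-in for the harness's waitforlisten helper):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Block until the app answers JSON-RPC on the default socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done

Because the RPC socket is a filesystem-backed UNIX socket, rpc.py can talk to the target from outside the network namespace.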
00:07:32.206 [2024-07-26 18:07:58.321859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.206 [2024-07-26 18:07:58.321975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.206 [2024-07-26 18:07:58.322052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.206 [2024-07-26 18:07:58.322068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.464 [2024-07-26 18:07:58.471430] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.464 Malloc0 00:07:32.464 [2024-07-26 18:07:58.531724] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1358638 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1358638 /var/tmp/bdevperf.sock 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1358638 ']' 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:32.464 { 00:07:32.464 "params": { 00:07:32.464 "name": "Nvme$subsystem", 00:07:32.464 "trtype": "$TEST_TRANSPORT", 00:07:32.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.464 "adrfam": "ipv4", 00:07:32.464 "trsvcid": "$NVMF_PORT", 00:07:32.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.464 "hdgst": ${hdgst:-false}, 00:07:32.464 "ddgst": ${ddgst:-false} 00:07:32.464 }, 00:07:32.464 "method": "bdev_nvme_attach_controller" 00:07:32.464 } 00:07:32.464 EOF 00:07:32.464 )") 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:32.464 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:32.464 "params": { 00:07:32.464 "name": "Nvme0", 00:07:32.464 "trtype": "tcp", 00:07:32.464 "traddr": "10.0.0.2", 00:07:32.464 "adrfam": "ipv4", 00:07:32.464 "trsvcid": "4420", 00:07:32.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:32.465 "hdgst": false, 00:07:32.465 "ddgst": false 00:07:32.465 }, 00:07:32.465 "method": "bdev_nvme_attach_controller" 00:07:32.465 }' 00:07:32.465 [2024-07-26 18:07:58.602504] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:07:32.465 [2024-07-26 18:07:58.602596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358638 ] 00:07:32.723 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.723 [2024-07-26 18:07:58.636266] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.723 [2024-07-26 18:07:58.666031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.723 [2024-07-26 18:07:58.753759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.983 Running I/O for 10 seconds... 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 
']' 00:07:32.983 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.243 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.243 [2024-07-26 18:07:59.358423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cbae0 is same with the state(5) to be set 00:07:33.243 [2024-07-26 18:07:59.358880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.243 [2024-07-26 18:07:59.358922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.243 [2024-07-26 18:07:59.358950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.243 [2024-07-26 18:07:59.358966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.243 [2024-07-26 18:07:59.358982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.243 [2024-07-26 18:07:59.358997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.243 [2024-07-26 18:07:59.359012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.243 [2024-07-26 18:07:59.359027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.243 [2024-07-26 18:07:59.359043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:33.244 [2024-07-26 18:07:59.359950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.359979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.359992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 
[2024-07-26 18:07:59.360252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 
18:07:59.360545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.244 [2024-07-26 18:07:59.360821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.244 [2024-07-26 18:07:59.360911] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9c95f0 was disconnected and freed. reset controller. 00:07:33.244 [2024-07-26 18:07:59.362097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:33.244 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.244 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.244 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.244 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.244 task offset: 71296 on job bdev=Nvme0n1 fails 00:07:33.244 00:07:33.244 Latency(us) 00:07:33.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.244 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:33.244 Job: Nvme0n1 ended in about 0.39 seconds with error 00:07:33.244 Verification LBA range: start 0x0 length 0x400 00:07:33.244 Nvme0n1 : 0.39 1305.28 81.58 163.16 0.00 42358.68 2669.99 38836.15 00:07:33.244 =================================================================================================================== 00:07:33.244 Total : 1305.28 81.58 163.16 0.00 42358.68 2669.99 38836.15 00:07:33.244 [2024-07-26 18:07:59.364001] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.244 [2024-07-26 18:07:59.364029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597b50 (9): Bad file descriptor 00:07:33.244 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.244 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:33.504 [2024-07-26 18:07:59.419913] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1358638 00:07:34.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1358638) - No such process 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:34.441 { 00:07:34.441 "params": { 00:07:34.441 "name": "Nvme$subsystem", 00:07:34.441 "trtype": "$TEST_TRANSPORT", 00:07:34.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.441 "adrfam": "ipv4", 00:07:34.441 "trsvcid": "$NVMF_PORT", 00:07:34.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.441 "hdgst": ${hdgst:-false}, 00:07:34.441 "ddgst": ${ddgst:-false} 00:07:34.441 }, 00:07:34.441 "method": "bdev_nvme_attach_controller" 00:07:34.441 } 00:07:34.441 EOF 00:07:34.441 )") 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:34.441 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:34.442 "params": { 00:07:34.442 "name": "Nvme0", 00:07:34.442 "trtype": "tcp", 00:07:34.442 "traddr": "10.0.0.2", 00:07:34.442 "adrfam": "ipv4", 00:07:34.442 "trsvcid": "4420", 00:07:34.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.442 "hdgst": false, 00:07:34.442 "ddgst": false 00:07:34.442 }, 00:07:34.442 "method": "bdev_nvme_attach_controller" 00:07:34.442 }' 00:07:34.442 [2024-07-26 18:08:00.419734] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:34.442 [2024-07-26 18:08:00.419805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358801 ] 00:07:34.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.442 [2024-07-26 18:08:00.450915] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:34.442 [2024-07-26 18:08:00.479892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.442 [2024-07-26 18:08:00.570472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.699 Running I/O for 1 seconds... 00:07:36.072 00:07:36.072 Latency(us) 00:07:36.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.072 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:36.072 Verification LBA range: start 0x0 length 0x400 00:07:36.072 Nvme0n1 : 1.04 1228.58 76.79 0.00 0.00 51353.88 12039.21 44661.57 00:07:36.072 =================================================================================================================== 00:07:36.072 Total : 1228.58 76.79 0.00 0.00 51353.88 12039.21 44661.57 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.072 rmmod nvme_tcp 00:07:36.072 rmmod nvme_fabrics 00:07:36.072 rmmod nvme_keyring 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1358477 ']' 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1358477 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1358477 ']' 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1358477 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1358477 00:07:36.072 18:08:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1358477' 00:07:36.072 killing process with pid 1358477 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1358477 00:07:36.072 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1358477 00:07:36.329 [2024-07-26 18:08:02.318420] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.329 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:38.867 00:07:38.867 real 0m8.448s 00:07:38.867 user 0m18.915s 00:07:38.867 sys 0m2.589s 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.867 ************************************ 00:07:38.867 END TEST nvmf_host_management 00:07:38.867 ************************************ 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.867 ************************************ 00:07:38.867 START TEST nvmf_lvol 00:07:38.867 ************************************ 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:38.867 * Looking for test storage... 
00:07:38.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:38.867 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:38.868 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.251 18:08:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:40.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:40.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.251 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.252 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:40.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:40.252 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:40.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:40.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:07:40.515 00:07:40.515 --- 10.0.0.2 ping statistics --- 00:07:40.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.515 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
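The nvmftestinit sequence traced just above is what lets a single-host NVMe/TCP run exercise real NIC hardware: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so target traffic actually crosses between the two cabled ports instead of being short-circuited over loopback. A minimal sketch of the same wiring, assuming two connected interfaces named port0 and port1 (hypothetical names standing in for cvl_0_0/cvl_0_1); the ping exchange that continues below is the script verifying both directions before the test proceeds:

    # target side: hide one port in its own namespace
    ip netns add nvmf_tgt_ns
    ip link set port0 netns nvmf_tgt_ns
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev port0
    ip netns exec nvmf_tgt_ns ip link set port0 up
    ip netns exec nvmf_tgt_ns ip link set lo up
    # initiator side: the peer port stays in the root namespace
    ip addr add 10.0.0.1/24 dev port1
    ip link set port1 up
    # let NVMe/TCP traffic (port 4420) through before connecting
    iptables -I INPUT 1 -i port1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> target ns
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1    # target ns -> root ns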
00:07:40.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:07:40.515 00:07:40.515 --- 10.0.0.1 ping statistics --- 00:07:40.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.515 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1360996 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1360996 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1360996 ']' 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.515 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.515 [2024-07-26 18:08:06.595624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:40.515 [2024-07-26 18:08:06.595705] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.515 [2024-07-26 18:08:06.632448] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:40.772 [2024-07-26 18:08:06.662832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.772 [2024-07-26 18:08:06.754443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.772 [2024-07-26 18:08:06.754497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.772 [2024-07-26 18:08:06.754514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.772 [2024-07-26 18:08:06.754527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.772 [2024-07-26 18:08:06.754539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.772 [2024-07-26 18:08:06.754595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.772 [2024-07-26 18:08:06.754627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.772 [2024-07-26 18:08:06.754630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.772 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.772 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:40.772 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.772 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:40.772 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.772 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.773 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:41.030 [2024-07-26 18:08:07.101926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.030 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:41.287 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:41.287 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:41.544 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:41.544 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:41.801 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:42.058 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f9568bce-76d7-4d3d-bc7f-89a40c7f5730 00:07:42.059 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9568bce-76d7-4d3d-bc7f-89a40c7f5730 lvol 20 00:07:42.316 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@32 -- # lvol=d28ac6d5-e0e7-4700-a56a-bd7ac915c001 00:07:42.316 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.574 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d28ac6d5-e0e7-4700-a56a-bd7ac915c001 00:07:42.832 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.091 [2024-07-26 18:08:09.184289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.091 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.349 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1361313 00:07:43.349 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:43.349 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:43.349 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.725 18:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d28ac6d5-e0e7-4700-a56a-bd7ac915c001 MY_SNAPSHOT 00:07:44.725 18:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=492bb18e-dcf9-4851-9cb4-961db215c746 00:07:44.725 18:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d28ac6d5-e0e7-4700-a56a-bd7ac915c001 30 00:07:44.983 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 492bb18e-dcf9-4851-9cb4-961db215c746 MY_CLONE 00:07:45.241 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=095dd548-3bb0-4b08-a6fc-322d60961011 00:07:45.241 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 095dd548-3bb0-4b08-a6fc-322d60961011 00:07:46.179 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1361313 00:07:54.297 Initializing NVMe Controllers 00:07:54.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:54.297 Controller IO queue size 128, less than required. 00:07:54.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:54.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:54.297 Initialization complete. Launching workers. 
00:07:54.297 ========================================================
00:07:54.297 Latency(us)
00:07:54.297 Device Information : IOPS MiB/s Average min max
00:07:54.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10725.40 41.90 11943.51 1334.08 81747.23
00:07:54.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10640.50 41.56 12040.01 2066.57 68086.21
00:07:54.297 ========================================================
00:07:54.297 Total : 21365.90 83.46 11991.57 1334.08 81747.23
00:07:54.297
00:07:54.297 18:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:54.297 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d28ac6d5-e0e7-4700-a56a-bd7ac915c001
00:07:54.297 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9568bce-76d7-4d3d-bc7f-89a40c7f5730
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:54.555 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:54.555 rmmod nvme_tcp
00:07:54.555 rmmod nvme_fabrics
00:07:54.815 rmmod nvme_keyring
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1360996 ']'
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1360996
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1360996 ']'
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1360996
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360996
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:54.815 18:08:20
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360996' 00:07:54.815 killing process with pid 1360996 00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1360996 00:07:54.815 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1360996 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.074 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.979 00:07:56.979 real 0m18.632s 00:07:56.979 user 1m4.243s 00:07:56.979 sys 0m5.337s 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.979 ************************************ 00:07:56.979 END TEST nvmf_lvol 00:07:56.979 ************************************ 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.979 18:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.979 ************************************ 00:07:56.979 START TEST nvmf_lvs_grow 00:07:56.979 ************************************ 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:57.238 * Looking for test storage... 
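Condensed from the nvmf_lvol trace that ends above (real 0m18.632s), the test drove the whole logical-volume stack over NVMe/TCP: two malloc bdevs striped into a raid0, an lvstore on top, a 20 MiB lvol exported through a subsystem, then snapshot, resize, clone, and inflate while spdk_nvme_perf kept writing to the namespace. A sketch of that RPC sequence, with rpc.py standing for the full scripts/rpc.py path and the angle-bracket UUIDs for the values the run generated:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                 # Malloc0
    rpc.py bdev_malloc_create 64 512                 # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs        # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20    # -> <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs against the namespace:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>

The nvmf_lvs_grow test whose banner surrounds this point reuses the same two-namespace network setup but targets lvstore growth instead.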
00:07:57.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.238 18:08:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.238 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:57.239 18:08:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.239 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:59.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:59.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.144 
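This device-discovery pass (a rerun of the one at 00:07:40 above) keys off PCI vendor:device IDs rather than driver names: Intel E810 (0x8086:0x1592, 0x8086:0x159b) and X722 (0x8086:0x37d2) plus a list of Mellanox IDs are looked up in a pci_bus_cache map, and for every hit the bound kernel netdev is read back from sysfs. The core of that loop, lifted from the trace (pci_devs and net_devs are arrays that common.sh populates elsewhere):

    for pci in "${pci_devs[@]}"; do
        # each PCI function exposes its netdev(s) under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The 'Found net devices under 0000:0a:00.x' lines that follow show the two ice-driven E810 ports resolving to cvl_0_0 and cvl_0_1 again.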
18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:59.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:59.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.144 18:08:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.144 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:07:59.403 00:07:59.403 --- 10.0.0.2 ping statistics --- 00:07:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.403 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:59.403 00:07:59.403 --- 10.0.0.1 ping statistics --- 00:07:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.403 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1364581 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1364581 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1364581 ']' 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.403 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.403 [2024-07-26 18:08:25.424920] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
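nvmfappstart records the PID of an nvmf_tgt launched inside the target namespace (nvmfpid=1364581 above) and then blocks in waitforlisten until the app's RPC socket answers; the DPDK EAL parameter dump that follows is the same start-up seen from the library side. A plausible reduction of that wait loop, not the literal autotest_common.sh code (max_retries=100 matches the traced default, and rpc_get_methods is a cheap RPC that succeeds once the socket is live):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done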
00:07:59.403 [2024-07-26 18:08:25.424998] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.403 [2024-07-26 18:08:25.461521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.403 [2024-07-26 18:08:25.493268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.661 [2024-07-26 18:08:25.588208] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.661 [2024-07-26 18:08:25.588262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.661 [2024-07-26 18:08:25.588277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.661 [2024-07-26 18:08:25.588289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.661 [2024-07-26 18:08:25.588300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.661 [2024-07-26 18:08:25.588327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.661 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.920 [2024-07-26 18:08:25.939805] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.920 ************************************ 00:07:59.920 START TEST lvs_grow_clean 00:07:59.920 ************************************ 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.920 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.178 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:00.178 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:00.438 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:00.438 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:00.438 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:00.697 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:00.697 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:00.697 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3196b020-adcc-49bd-bb55-a2a2630ece39 lvol 150 00:08:00.959 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4 00:08:00.959 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:00.959 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:01.239 [2024-07-26 18:08:27.247307] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:01.239 [2024-07-26 18:08:27.247411] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 
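The cluster counts asserted in this test follow directly from the sizes chosen above: a 200 MiB AIO file carved into 4 MiB clusters gives 50 clusters, of which the lvstore keeps one for metadata, hence total_data_clusters == 49; growing the backing file to 400 MiB and rescanning the AIO bdev (old block count 51200, new block count 102400 at the 4096-byte block size given to bdev_aio_create) resizes the bdev but not the lvstore until it is grown explicitly, which is why the same 49 is re-checked first. Condensed from the trace, with rpc.py and the aio_bdev file path shortened:

    truncate -s 200M aio_bdev
    rpc.py bdev_aio_create aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs    # 49 data clusters
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150       # 150 MiB lvol
    truncate -s 400M aio_bdev
    rpc.py bdev_aio_rescan aio_bdev                      # bdev doubles, lvstore still 49
    # later in the run:
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>          # -> 99 data clusters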
00:08:01.239 true 00:08:01.239 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:01.239 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:01.506 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:01.506 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:01.765 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4 00:08:02.025 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:02.283 [2024-07-26 18:08:28.242430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.283 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1365017 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1365017 /var/tmp/bdevperf.sock 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1365017 ']' 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.541 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:02.541 [2024-07-26 18:08:28.539179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
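bdevperf plays the initiator here: started with -z it sits idle until driven over its own RPC socket (/var/tmp/bdevperf.sock), the NVMe bdev is attached by RPC, and the JSON dump that follows is bdev_get_bdevs confirming the remote namespace surfaced as Nvme0n1. In outline, with paths shortened to match the commands visible in the trace:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests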
00:08:02.541 [2024-07-26 18:08:28.539267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365017 ] 00:08:02.541 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.541 [2024-07-26 18:08:28.571447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.541 [2024-07-26 18:08:28.601561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.799 [2024-07-26 18:08:28.693273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.799 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.799 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:02.799 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:03.366 Nvme0n1 00:08:03.366 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:03.624 [ 00:08:03.624 { 00:08:03.624 "name": "Nvme0n1", 00:08:03.624 "aliases": [ 00:08:03.624 "2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4" 00:08:03.624 ], 00:08:03.624 "product_name": "NVMe disk", 00:08:03.624 "block_size": 4096, 00:08:03.624 "num_blocks": 38912, 00:08:03.624 "uuid": "2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4", 00:08:03.624 "assigned_rate_limits": { 00:08:03.624 "rw_ios_per_sec": 0, 00:08:03.624 "rw_mbytes_per_sec": 0, 00:08:03.624 "r_mbytes_per_sec": 0, 00:08:03.624 "w_mbytes_per_sec": 0 00:08:03.624 }, 00:08:03.624 "claimed": false, 00:08:03.624 "zoned": false, 00:08:03.624 "supported_io_types": { 00:08:03.624 "read": true, 00:08:03.624 "write": true, 00:08:03.624 "unmap": true, 00:08:03.624 "flush": true, 00:08:03.624 "reset": true, 00:08:03.624 "nvme_admin": true, 00:08:03.624 "nvme_io": true, 00:08:03.624 "nvme_io_md": false, 00:08:03.624 "write_zeroes": true, 00:08:03.624 "zcopy": false, 00:08:03.624 "get_zone_info": false, 00:08:03.624 "zone_management": false, 00:08:03.624 "zone_append": false, 00:08:03.624 "compare": true, 00:08:03.624 "compare_and_write": true, 00:08:03.624 "abort": true, 00:08:03.624 "seek_hole": false, 00:08:03.624 "seek_data": false, 00:08:03.624 "copy": true, 00:08:03.624 "nvme_iov_md": false 00:08:03.624 }, 00:08:03.624 "memory_domains": [ 00:08:03.624 { 00:08:03.624 "dma_device_id": "system", 00:08:03.624 "dma_device_type": 1 00:08:03.624 } 00:08:03.624 ], 00:08:03.624 "driver_specific": { 00:08:03.624 "nvme": [ 00:08:03.624 { 00:08:03.624 "trid": { 00:08:03.624 "trtype": "TCP", 00:08:03.624 "adrfam": "IPv4", 00:08:03.624 "traddr": "10.0.0.2", 00:08:03.624 "trsvcid": "4420", 00:08:03.624 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:03.624 }, 00:08:03.624 "ctrlr_data": { 00:08:03.624 "cntlid": 1, 00:08:03.624 "vendor_id": "0x8086", 00:08:03.624 "model_number": "SPDK bdev Controller", 00:08:03.624 "serial_number": "SPDK0", 00:08:03.624 "firmware_revision": "24.09", 00:08:03.624 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:08:03.624 "oacs": { 00:08:03.624 "security": 0, 00:08:03.624 "format": 0, 00:08:03.624 "firmware": 0, 00:08:03.624 "ns_manage": 0 00:08:03.624 }, 00:08:03.624 "multi_ctrlr": true, 00:08:03.624 "ana_reporting": false 00:08:03.624 }, 00:08:03.624 "vs": { 00:08:03.624 "nvme_version": "1.3" 00:08:03.624 }, 00:08:03.624 "ns_data": { 00:08:03.624 "id": 1, 00:08:03.624 "can_share": true 00:08:03.624 } 00:08:03.624 } 00:08:03.624 ], 00:08:03.624 "mp_policy": "active_passive" 00:08:03.624 } 00:08:03.624 } 00:08:03.624 ] 00:08:03.624 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1365153 00:08:03.624 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:03.624 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.624 Running I/O for 10 seconds... 00:08:04.560 Latency(us) 00:08:04.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.560 Nvme0n1 : 1.00 14386.00 56.20 0.00 0.00 0.00 0.00 0.00 00:08:04.560 =================================================================================================================== 00:08:04.560 Total : 14386.00 56.20 0.00 0.00 0.00 0.00 0.00 00:08:04.560 00:08:05.496 18:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:05.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.496 Nvme0n1 : 2.00 14511.00 56.68 0.00 0.00 0.00 0.00 0.00 00:08:05.496 =================================================================================================================== 00:08:05.496 Total : 14511.00 56.68 0.00 0.00 0.00 0.00 0.00 00:08:05.496 00:08:05.753 true 00:08:05.753 18:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:05.753 18:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:06.011 18:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:06.011 18:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:06.011 18:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1365153 00:08:06.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.578 Nvme0n1 : 3.00 14612.33 57.08 0.00 0.00 0.00 0.00 0.00 00:08:06.578 =================================================================================================================== 00:08:06.578 Total : 14612.33 57.08 0.00 0.00 0.00 0.00 0.00 00:08:06.578 00:08:07.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.516 Nvme0n1 : 4.00 14732.25 57.55 0.00 0.00 0.00 0.00 0.00 00:08:07.516 
=================================================================================================================== 00:08:07.516 Total : 14732.25 57.55 0.00 0.00 0.00 0.00 0.00 00:08:07.516 00:08:08.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.893 Nvme0n1 : 5.00 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:08:08.894 =================================================================================================================== 00:08:08.894 Total : 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:08:08.894 00:08:09.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.832 Nvme0n1 : 6.00 14868.83 58.08 0.00 0.00 0.00 0.00 0.00 00:08:09.832 =================================================================================================================== 00:08:09.832 Total : 14868.83 58.08 0.00 0.00 0.00 0.00 0.00 00:08:09.832 00:08:10.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.771 Nvme0n1 : 7.00 14931.14 58.32 0.00 0.00 0.00 0.00 0.00 00:08:10.771 =================================================================================================================== 00:08:10.771 Total : 14931.14 58.32 0.00 0.00 0.00 0.00 0.00 00:08:10.771 00:08:11.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.709 Nvme0n1 : 8.00 14968.12 58.47 0.00 0.00 0.00 0.00 0.00 00:08:11.709 =================================================================================================================== 00:08:11.709 Total : 14968.12 58.47 0.00 0.00 0.00 0.00 0.00 00:08:11.709 00:08:12.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.646 Nvme0n1 : 9.00 14998.67 58.59 0.00 0.00 0.00 0.00 0.00 00:08:12.646 =================================================================================================================== 00:08:12.646 Total : 14998.67 58.59 0.00 0.00 0.00 0.00 0.00 00:08:12.646 00:08:13.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.585 Nvme0n1 : 10.00 15039.90 58.75 0.00 0.00 0.00 0.00 0.00 00:08:13.585 =================================================================================================================== 00:08:13.585 Total : 15039.90 58.75 0.00 0.00 0.00 0.00 0.00 00:08:13.585 00:08:13.585 00:08:13.585 Latency(us) 00:08:13.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.585 Nvme0n1 : 10.01 15040.24 58.75 0.00 0.00 8504.26 2500.08 16699.54 00:08:13.585 =================================================================================================================== 00:08:13.585 Total : 15040.24 58.75 0.00 0.00 8504.26 2500.08 16699.54 00:08:13.585 0 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1365017 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1365017 ']' 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1365017 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.585 18:08:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1365017 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1365017' 00:08:13.585 killing process with pid 1365017 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1365017 00:08:13.585 Received shutdown signal, test time was about 10.000000 seconds 00:08:13.585 00:08:13.585 Latency(us) 00:08:13.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.585 =================================================================================================================== 00:08:13.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:13.585 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1365017 00:08:13.843 18:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.101 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.359 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:14.359 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:14.617 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:14.617 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:14.617 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:14.876 [2024-07-26 18:08:40.926828] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.876 18:08:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:14.876 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:15.135 request: 00:08:15.135 { 00:08:15.135 "uuid": "3196b020-adcc-49bd-bb55-a2a2630ece39", 00:08:15.135 "method": "bdev_lvol_get_lvstores", 00:08:15.135 "req_id": 1 00:08:15.135 } 00:08:15.135 Got JSON-RPC error response 00:08:15.135 response: 00:08:15.135 { 00:08:15.135 "code": -19, 00:08:15.135 "message": "No such device" 00:08:15.135 } 00:08:15.135 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:15.135 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.135 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.135 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.135 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.395 aio_bdev 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.395 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:15.655 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4 -t 2000 00:08:15.915 [ 00:08:15.915 { 00:08:15.915 "name": "2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4", 00:08:15.915 "aliases": [ 00:08:15.915 "lvs/lvol" 00:08:15.915 ], 00:08:15.915 "product_name": "Logical Volume", 00:08:15.915 "block_size": 4096, 00:08:15.915 "num_blocks": 38912, 00:08:15.915 "uuid": "2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4", 00:08:15.915 "assigned_rate_limits": { 00:08:15.915 "rw_ios_per_sec": 0, 00:08:15.915 "rw_mbytes_per_sec": 0, 00:08:15.915 "r_mbytes_per_sec": 0, 00:08:15.915 "w_mbytes_per_sec": 0 00:08:15.915 }, 00:08:15.915 "claimed": false, 00:08:15.915 "zoned": false, 00:08:15.915 "supported_io_types": { 00:08:15.915 "read": true, 00:08:15.915 "write": true, 00:08:15.915 "unmap": true, 00:08:15.915 "flush": false, 00:08:15.915 "reset": true, 00:08:15.915 "nvme_admin": false, 00:08:15.915 "nvme_io": false, 00:08:15.915 "nvme_io_md": false, 00:08:15.915 "write_zeroes": true, 00:08:15.915 "zcopy": false, 00:08:15.915 "get_zone_info": false, 00:08:15.916 "zone_management": false, 00:08:15.916 "zone_append": false, 00:08:15.916 "compare": false, 00:08:15.916 "compare_and_write": false, 00:08:15.916 "abort": false, 00:08:15.916 "seek_hole": true, 00:08:15.916 "seek_data": true, 00:08:15.916 "copy": false, 00:08:15.916 "nvme_iov_md": false 00:08:15.916 }, 00:08:15.916 "driver_specific": { 00:08:15.916 "lvol": { 00:08:15.916 "lvol_store_uuid": "3196b020-adcc-49bd-bb55-a2a2630ece39", 00:08:15.916 "base_bdev": "aio_bdev", 00:08:15.916 "thin_provision": false, 00:08:15.916 "num_allocated_clusters": 38, 00:08:15.916 "snapshot": false, 00:08:15.916 "clone": false, 00:08:15.916 "esnap_clone": false 00:08:15.916 } 00:08:15.916 } 00:08:15.916 } 00:08:15.916 ] 00:08:15.916 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:15.916 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:15.916 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:16.176 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:16.176 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:16.176 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:16.434 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:16.434 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2ba7ae73-dc7f-4612-bb10-004e0b1bbeb4 00:08:16.710 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3196b020-adcc-49bd-bb55-a2a2630ece39 00:08:17.010 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.271 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.271 00:08:17.271 real 0m17.236s 00:08:17.271 user 0m16.416s 00:08:17.271 sys 0m2.024s 00:08:17.271 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.271 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.271 ************************************ 00:08:17.271 END TEST lvs_grow_clean 00:08:17.271 ************************************ 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.272 ************************************ 00:08:17.272 START TEST lvs_grow_dirty 00:08:17.272 ************************************ 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.272 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.530 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.530 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.788 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:17.788 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:17.788 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:18.046 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:18.046 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:18.046 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa lvol 150 00:08:18.306 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:18.306 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.306 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:18.565 [2024-07-26 18:08:44.564398] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:18.565 [2024-07-26 18:08:44.564495] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:18.565 true 00:08:18.565 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:18.565 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:18.823 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:18.823 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:19.082 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:19.343 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:19.602 [2024-07-26 18:08:45.547456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:19.602 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.861 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1367079 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1367079 /var/tmp/bdevperf.sock 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1367079 ']' 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:19.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.862 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.862 [2024-07-26 18:08:45.847229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:19.862 [2024-07-26 18:08:45.847318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367079 ] 00:08:19.862 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.862 [2024-07-26 18:08:45.881490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
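The bdevperf flags above map one-to-one onto the per-second result tables that follow. A sketch of the same invocation with the options annotated (the annotations are interpretation, not log output; paths shortened):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 \            # pin the reactor to core 1 (matches 'Reactor started on core 1')
      -o 4096 -q 128 \    # 4 KiB I/Os at queue depth 128
      -w randwrite \      # random-write workload
      -t 10 -S 1 \        # run for 10 s, print a status line every second
      -z &                # start idle and wait for an RPC to begin
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests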
00:08:19.862 [2024-07-26 18:08:45.912477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.120 [2024-07-26 18:08:46.006636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.120 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.120 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:20.120 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:20.688 Nvme0n1 00:08:20.688 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:20.688 [ 00:08:20.688 { 00:08:20.688 "name": "Nvme0n1", 00:08:20.688 "aliases": [ 00:08:20.688 "6be32008-9cbc-48ea-8743-635bdf0fca89" 00:08:20.688 ], 00:08:20.688 "product_name": "NVMe disk", 00:08:20.688 "block_size": 4096, 00:08:20.688 "num_blocks": 38912, 00:08:20.688 "uuid": "6be32008-9cbc-48ea-8743-635bdf0fca89", 00:08:20.688 "assigned_rate_limits": { 00:08:20.688 "rw_ios_per_sec": 0, 00:08:20.688 "rw_mbytes_per_sec": 0, 00:08:20.688 "r_mbytes_per_sec": 0, 00:08:20.688 "w_mbytes_per_sec": 0 00:08:20.688 }, 00:08:20.688 "claimed": false, 00:08:20.688 "zoned": false, 00:08:20.688 "supported_io_types": { 00:08:20.688 "read": true, 00:08:20.688 "write": true, 00:08:20.688 "unmap": true, 00:08:20.688 "flush": true, 00:08:20.688 "reset": true, 00:08:20.688 "nvme_admin": true, 00:08:20.688 "nvme_io": true, 00:08:20.688 "nvme_io_md": false, 00:08:20.688 "write_zeroes": true, 00:08:20.688 "zcopy": false, 00:08:20.688 "get_zone_info": false, 00:08:20.688 "zone_management": false, 00:08:20.688 "zone_append": false, 00:08:20.689 "compare": true, 00:08:20.689 "compare_and_write": true, 00:08:20.689 "abort": true, 00:08:20.689 "seek_hole": false, 00:08:20.689 "seek_data": false, 00:08:20.689 "copy": true, 00:08:20.689 "nvme_iov_md": false 00:08:20.689 }, 00:08:20.689 "memory_domains": [ 00:08:20.689 { 00:08:20.689 "dma_device_id": "system", 00:08:20.689 "dma_device_type": 1 00:08:20.689 } 00:08:20.689 ], 00:08:20.689 "driver_specific": { 00:08:20.689 "nvme": [ 00:08:20.689 { 00:08:20.689 "trid": { 00:08:20.689 "trtype": "TCP", 00:08:20.689 "adrfam": "IPv4", 00:08:20.689 "traddr": "10.0.0.2", 00:08:20.689 "trsvcid": "4420", 00:08:20.689 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:20.689 }, 00:08:20.689 "ctrlr_data": { 00:08:20.689 "cntlid": 1, 00:08:20.689 "vendor_id": "0x8086", 00:08:20.689 "model_number": "SPDK bdev Controller", 00:08:20.689 "serial_number": "SPDK0", 00:08:20.689 "firmware_revision": "24.09", 00:08:20.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.689 "oacs": { 00:08:20.689 "security": 0, 00:08:20.689 "format": 0, 00:08:20.689 "firmware": 0, 00:08:20.689 "ns_manage": 0 00:08:20.689 }, 00:08:20.689 "multi_ctrlr": true, 00:08:20.689 "ana_reporting": false 00:08:20.689 }, 00:08:20.689 "vs": { 00:08:20.689 "nvme_version": "1.3" 00:08:20.689 }, 00:08:20.689 "ns_data": { 00:08:20.689 "id": 1, 00:08:20.689 "can_share": true 00:08:20.689 } 00:08:20.689 } 00:08:20.689 ], 00:08:20.689 "mp_policy": "active_passive" 00:08:20.689 } 00:08:20.689 } 00:08:20.689 ] 00:08:20.689 
18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1367215 00:08:20.689 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:20.689 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.947 Running I/O for 10 seconds... 00:08:21.881 Latency(us) 00:08:21.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.881 Nvme0n1 : 1.00 14679.00 57.34 0.00 0.00 0.00 0.00 0.00 00:08:21.881 =================================================================================================================== 00:08:21.881 Total : 14679.00 57.34 0.00 0.00 0.00 0.00 0.00 00:08:21.881 00:08:22.816 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:22.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.816 Nvme0n1 : 2.00 14968.50 58.47 0.00 0.00 0.00 0.00 0.00 00:08:22.816 =================================================================================================================== 00:08:22.816 Total : 14968.50 58.47 0.00 0.00 0.00 0.00 0.00 00:08:22.816 00:08:23.074 true 00:08:23.074 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:23.074 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:23.332 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:23.332 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:23.332 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1367215 00:08:23.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.899 Nvme0n1 : 3.00 15125.33 59.08 0.00 0.00 0.00 0.00 0.00 00:08:23.899 =================================================================================================================== 00:08:23.899 Total : 15125.33 59.08 0.00 0.00 0.00 0.00 0.00 00:08:23.899 00:08:24.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.835 Nvme0n1 : 4.00 15118.00 59.05 0.00 0.00 0.00 0.00 0.00 00:08:24.835 =================================================================================================================== 00:08:24.835 Total : 15118.00 59.05 0.00 0.00 0.00 0.00 0.00 00:08:24.835 00:08:26.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.208 Nvme0n1 : 5.00 15207.80 59.41 0.00 0.00 0.00 0.00 0.00 00:08:26.208 =================================================================================================================== 00:08:26.208 Total : 15207.80 59.41 0.00 0.00 0.00 0.00 0.00 00:08:26.208 00:08:27.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:27.142 Nvme0n1 : 6.00 15083.50 58.92 0.00 0.00 0.00 0.00 0.00 00:08:27.142 =================================================================================================================== 00:08:27.142 Total : 15083.50 58.92 0.00 0.00 0.00 0.00 0.00 00:08:27.142 00:08:28.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.074 Nvme0n1 : 7.00 14993.86 58.57 0.00 0.00 0.00 0.00 0.00 00:08:28.074 =================================================================================================================== 00:08:28.074 Total : 14993.86 58.57 0.00 0.00 0.00 0.00 0.00 00:08:28.074 00:08:29.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.007 Nvme0n1 : 8.00 14918.62 58.28 0.00 0.00 0.00 0.00 0.00 00:08:29.007 =================================================================================================================== 00:08:29.007 Total : 14918.62 58.28 0.00 0.00 0.00 0.00 0.00 00:08:29.007 00:08:29.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.940 Nvme0n1 : 9.00 14866.33 58.07 0.00 0.00 0.00 0.00 0.00 00:08:29.940 =================================================================================================================== 00:08:29.940 Total : 14866.33 58.07 0.00 0.00 0.00 0.00 0.00 00:08:29.940 00:08:30.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.872 Nvme0n1 : 10.00 14822.90 57.90 0.00 0.00 0.00 0.00 0.00 00:08:30.872 =================================================================================================================== 00:08:30.872 Total : 14822.90 57.90 0.00 0.00 0.00 0.00 0.00 00:08:30.872 00:08:30.872 00:08:30.872 Latency(us) 00:08:30.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.872 Nvme0n1 : 10.01 14821.92 57.90 0.00 0.00 8628.37 2694.26 18641.35 00:08:30.872 =================================================================================================================== 00:08:30.872 Total : 14821.92 57.90 0.00 0.00 8628.37 2694.26 18641.35 00:08:30.872 0 00:08:30.872 18:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1367079 00:08:30.872 18:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1367079 ']' 00:08:30.872 18:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1367079 00:08:30.872 18:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:30.872 18:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.872 18:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1367079 00:08:30.873 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:30.873 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:30.873 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1367079' 00:08:30.873 killing process with pid 1367079 00:08:30.873 18:08:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1367079 00:08:30.873 Received shutdown signal, test time was about 10.000000 seconds 00:08:30.873 00:08:30.873 Latency(us) 00:08:30.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.873 =================================================================================================================== 00:08:30.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:30.873 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1367079 00:08:31.130 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.387 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:31.645 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:31.645 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:31.903 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:31.903 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:31.903 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1364581 00:08:31.903 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1364581 00:08:31.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1364581 Killed "${NVMF_APP[@]}" "$@" 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1368551 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1368551 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1368551 ']' 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.903 18:08:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.903 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.163 [2024-07-26 18:08:58.063209] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:32.163 [2024-07-26 18:08:58.063292] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.163 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.163 [2024-07-26 18:08:58.108659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:32.163 [2024-07-26 18:08:58.138972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.163 [2024-07-26 18:08:58.233285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.163 [2024-07-26 18:08:58.233365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.163 [2024-07-26 18:08:58.233382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.163 [2024-07-26 18:08:58.233396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.163 [2024-07-26 18:08:58.233407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
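Unlike the first nvmf_tgt, this restart runs with -e 0xFFFF, so every tracepoint group is recorded into /dev/shm/nvmf_trace.0; that is the file process_shm archives at the end of the run. Following the NOTICE lines above, a snapshot could be inspected roughly like this (exact spdk_trace flags may differ across SPDK versions):

  spdk_trace -s nvmf -i 0            # live snapshot, as suggested by the NOTICE
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the shm file for offline decoding
  spdk_trace -f /tmp/nvmf_trace.0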
00:08:32.163 [2024-07-26 18:08:58.233437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.468 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.726 [2024-07-26 18:08:58.612521] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:32.726 [2024-07-26 18:08:58.612680] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:32.726 [2024-07-26 18:08:58.612731] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:32.726 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:32.726 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:32.726 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:32.727 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.727 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:32.727 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.727 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.727 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.984 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6be32008-9cbc-48ea-8743-635bdf0fca89 -t 2000 00:08:33.242 [ 00:08:33.243 { 00:08:33.243 "name": "6be32008-9cbc-48ea-8743-635bdf0fca89", 00:08:33.243 "aliases": [ 00:08:33.243 "lvs/lvol" 00:08:33.243 ], 00:08:33.243 "product_name": "Logical Volume", 00:08:33.243 "block_size": 4096, 00:08:33.243 "num_blocks": 38912, 00:08:33.243 "uuid": "6be32008-9cbc-48ea-8743-635bdf0fca89", 00:08:33.243 "assigned_rate_limits": { 00:08:33.243 "rw_ios_per_sec": 0, 00:08:33.243 "rw_mbytes_per_sec": 0, 00:08:33.243 "r_mbytes_per_sec": 0, 00:08:33.243 "w_mbytes_per_sec": 0 00:08:33.243 }, 00:08:33.243 "claimed": false, 00:08:33.243 "zoned": false, 
00:08:33.243 "supported_io_types": { 00:08:33.243 "read": true, 00:08:33.243 "write": true, 00:08:33.243 "unmap": true, 00:08:33.243 "flush": false, 00:08:33.243 "reset": true, 00:08:33.243 "nvme_admin": false, 00:08:33.243 "nvme_io": false, 00:08:33.243 "nvme_io_md": false, 00:08:33.243 "write_zeroes": true, 00:08:33.243 "zcopy": false, 00:08:33.243 "get_zone_info": false, 00:08:33.243 "zone_management": false, 00:08:33.243 "zone_append": false, 00:08:33.243 "compare": false, 00:08:33.243 "compare_and_write": false, 00:08:33.243 "abort": false, 00:08:33.243 "seek_hole": true, 00:08:33.243 "seek_data": true, 00:08:33.243 "copy": false, 00:08:33.243 "nvme_iov_md": false 00:08:33.243 }, 00:08:33.243 "driver_specific": { 00:08:33.243 "lvol": { 00:08:33.243 "lvol_store_uuid": "b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa", 00:08:33.243 "base_bdev": "aio_bdev", 00:08:33.243 "thin_provision": false, 00:08:33.243 "num_allocated_clusters": 38, 00:08:33.243 "snapshot": false, 00:08:33.243 "clone": false, 00:08:33.243 "esnap_clone": false 00:08:33.243 } 00:08:33.243 } 00:08:33.243 } 00:08:33.243 ] 00:08:33.243 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:33.243 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:33.243 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:33.501 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:33.501 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:33.501 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:33.759 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:33.759 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.018 [2024-07-26 18:08:59.937717] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.018 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:34.276 request: 00:08:34.276 { 00:08:34.276 "uuid": "b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa", 00:08:34.276 "method": "bdev_lvol_get_lvstores", 00:08:34.276 "req_id": 1 00:08:34.276 } 00:08:34.276 Got JSON-RPC error response 00:08:34.276 response: 00:08:34.276 { 00:08:34.276 "code": -19, 00:08:34.276 "message": "No such device" 00:08:34.276 } 00:08:34.276 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:34.276 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.276 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.276 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.276 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:34.536 aio_bdev 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.536 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:34.794 18:09:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6be32008-9cbc-48ea-8743-635bdf0fca89 -t 2000 00:08:35.053 [ 00:08:35.053 { 00:08:35.053 "name": "6be32008-9cbc-48ea-8743-635bdf0fca89", 00:08:35.053 "aliases": [ 00:08:35.053 "lvs/lvol" 00:08:35.053 ], 00:08:35.053 "product_name": "Logical Volume", 00:08:35.053 "block_size": 4096, 00:08:35.053 "num_blocks": 38912, 00:08:35.053 "uuid": "6be32008-9cbc-48ea-8743-635bdf0fca89", 00:08:35.053 "assigned_rate_limits": { 00:08:35.053 "rw_ios_per_sec": 0, 00:08:35.053 "rw_mbytes_per_sec": 0, 00:08:35.053 "r_mbytes_per_sec": 0, 00:08:35.053 "w_mbytes_per_sec": 0 00:08:35.053 }, 00:08:35.053 "claimed": false, 00:08:35.053 "zoned": false, 00:08:35.053 "supported_io_types": { 00:08:35.053 "read": true, 00:08:35.053 "write": true, 00:08:35.053 "unmap": true, 00:08:35.053 "flush": false, 00:08:35.053 "reset": true, 00:08:35.053 "nvme_admin": false, 00:08:35.053 "nvme_io": false, 00:08:35.053 "nvme_io_md": false, 00:08:35.053 "write_zeroes": true, 00:08:35.053 "zcopy": false, 00:08:35.053 "get_zone_info": false, 00:08:35.053 "zone_management": false, 00:08:35.053 "zone_append": false, 00:08:35.053 "compare": false, 00:08:35.053 "compare_and_write": false, 00:08:35.053 "abort": false, 00:08:35.053 "seek_hole": true, 00:08:35.053 "seek_data": true, 00:08:35.053 "copy": false, 00:08:35.053 "nvme_iov_md": false 00:08:35.053 }, 00:08:35.053 "driver_specific": { 00:08:35.053 "lvol": { 00:08:35.053 "lvol_store_uuid": "b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa", 00:08:35.053 "base_bdev": "aio_bdev", 00:08:35.053 "thin_provision": false, 00:08:35.053 "num_allocated_clusters": 38, 00:08:35.053 "snapshot": false, 00:08:35.053 "clone": false, 00:08:35.053 "esnap_clone": false 00:08:35.053 } 00:08:35.053 } 00:08:35.053 } 00:08:35.053 ] 00:08:35.053 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:35.053 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:35.053 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:35.311 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:35.311 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 00:08:35.312 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:35.570 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:35.570 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6be32008-9cbc-48ea-8743-635bdf0fca89 00:08:35.829 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa 
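The lvs_grow_dirty verification traced above boils down to a small rpc.py-plus-jq pattern: query the lvstore by UUID, extract its cluster accounting, and assert the values the test expects. A minimal sketch of that pattern, assuming a running SPDK target and reusing the workspace path and lvstore UUID from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    uuid=b80dbd74-69ca-42b9-a20a-4e8a9cdf0baa
    # bdev_lvol_get_lvstores prints a JSON array; the accounting sits in its first entry
    free_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    data_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    (( free_clusters == 61 ))   # value asserted at nvmf_lvs_grow.sh@88 in this run
    (( data_clusters == 99 ))   # value asserted at nvmf_lvs_grow.sh@89 in this run

Run against a deleted lvstore, the same query fails with the JSON-RPC code -19 "No such device" response captured above, which is exactly the failure the NOT wrapper at nvmf_lvs_grow.sh@85 asserts.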
00:08:36.087 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.345 00:08:36.345 real 0m19.052s 00:08:36.345 user 0m47.813s 00:08:36.345 sys 0m4.704s 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:36.345 ************************************ 00:08:36.345 END TEST lvs_grow_dirty 00:08:36.345 ************************************ 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:36.345 nvmf_trace.0 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:36.345 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.346 rmmod nvme_tcp 00:08:36.346 rmmod nvme_fabrics 00:08:36.346 rmmod nvme_keyring 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1368551 ']' 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1368551 00:08:36.346 
18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1368551 ']' 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1368551 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1368551 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1368551' 00:08:36.346 killing process with pid 1368551 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1368551 00:08:36.346 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1368551 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.605 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.139 00:08:39.139 real 0m41.642s 00:08:39.139 user 1m10.003s 00:08:39.139 sys 0m8.632s 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.139 ************************************ 00:08:39.139 END TEST nvmf_lvs_grow 00:08:39.139 ************************************ 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.139 ************************************ 00:08:39.139 START TEST nvmf_bdev_io_wait 00:08:39.139 ************************************ 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:39.139 * Looking for test storage... 00:08:39.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.139 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.140 
18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.140 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:41.045 18:09:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:41.045 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.046 18:09:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:08:41.046 00:08:41.046 --- 10.0.0.2 ping statistics --- 00:08:41.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.046 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:08:41.046 00:08:41.046 --- 10.0.0.1 ping statistics --- 00:08:41.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.046 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.046 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1371194 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1371194 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1371194 ']' 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.046 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 [2024-07-26 18:09:07.065005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:41.046 [2024-07-26 18:09:07.065085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.046 [2024-07-26 18:09:07.102955] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.046 [2024-07-26 18:09:07.135217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.305 [2024-07-26 18:09:07.233159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.305 [2024-07-26 18:09:07.233219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.305 [2024-07-26 18:09:07.233235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.305 [2024-07-26 18:09:07.233248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.305 [2024-07-26 18:09:07.233259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.305 [2024-07-26 18:09:07.233319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.305 [2024-07-26 18:09:07.233361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.305 [2024-07-26 18:09:07.233393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.305 [2024-07-26 18:09:07.233395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.305 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.305 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:41.305 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.305 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 [2024-07-26 18:09:07.375432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 Malloc0 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.306 [2024-07-26 18:09:07.435177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1371502 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1371505 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.306 18:09:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.306 { 00:08:41.306 "params": { 00:08:41.306 "name": "Nvme$subsystem", 00:08:41.306 "trtype": "$TEST_TRANSPORT", 00:08:41.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.306 "adrfam": "ipv4", 00:08:41.306 "trsvcid": "$NVMF_PORT", 00:08:41.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.306 "hdgst": ${hdgst:-false}, 00:08:41.306 "ddgst": ${ddgst:-false} 00:08:41.306 }, 00:08:41.306 "method": "bdev_nvme_attach_controller" 00:08:41.306 } 00:08:41.306 EOF 00:08:41.306 )") 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1371509 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.306 { 00:08:41.306 "params": { 00:08:41.306 "name": "Nvme$subsystem", 00:08:41.306 "trtype": "$TEST_TRANSPORT", 00:08:41.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.306 "adrfam": "ipv4", 00:08:41.306 "trsvcid": "$NVMF_PORT", 00:08:41.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.306 "hdgst": ${hdgst:-false}, 00:08:41.306 "ddgst": ${ddgst:-false} 00:08:41.306 }, 00:08:41.306 "method": "bdev_nvme_attach_controller" 00:08:41.306 } 00:08:41.306 EOF 00:08:41.306 )") 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1371514 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.306 { 00:08:41.306 "params": { 00:08:41.306 "name": "Nvme$subsystem", 00:08:41.306 "trtype": "$TEST_TRANSPORT", 00:08:41.306 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:08:41.306 "adrfam": "ipv4", 00:08:41.306 "trsvcid": "$NVMF_PORT", 00:08:41.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.306 "hdgst": ${hdgst:-false}, 00:08:41.306 "ddgst": ${ddgst:-false} 00:08:41.306 }, 00:08:41.306 "method": "bdev_nvme_attach_controller" 00:08:41.306 } 00:08:41.306 EOF 00:08:41.306 )") 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.306 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.306 { 00:08:41.306 "params": { 00:08:41.306 "name": "Nvme$subsystem", 00:08:41.306 "trtype": "$TEST_TRANSPORT", 00:08:41.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.306 "adrfam": "ipv4", 00:08:41.306 "trsvcid": "$NVMF_PORT", 00:08:41.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.307 "hdgst": ${hdgst:-false}, 00:08:41.307 "ddgst": ${ddgst:-false} 00:08:41.307 }, 00:08:41.307 "method": "bdev_nvme_attach_controller" 00:08:41.307 } 00:08:41.307 EOF 00:08:41.307 )") 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1371502 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.307 "params": { 00:08:41.307 "name": "Nvme1", 00:08:41.307 "trtype": "tcp", 00:08:41.307 "traddr": "10.0.0.2", 00:08:41.307 "adrfam": "ipv4", 00:08:41.307 "trsvcid": "4420", 00:08:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.307 "hdgst": false, 00:08:41.307 "ddgst": false 00:08:41.307 }, 00:08:41.307 "method": "bdev_nvme_attach_controller" 00:08:41.307 }' 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.307 "params": { 00:08:41.307 "name": "Nvme1", 00:08:41.307 "trtype": "tcp", 00:08:41.307 "traddr": "10.0.0.2", 00:08:41.307 "adrfam": "ipv4", 00:08:41.307 "trsvcid": "4420", 00:08:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.307 "hdgst": false, 00:08:41.307 "ddgst": false 00:08:41.307 }, 00:08:41.307 "method": "bdev_nvme_attach_controller" 00:08:41.307 }' 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.307 "params": { 00:08:41.307 "name": "Nvme1", 00:08:41.307 "trtype": "tcp", 00:08:41.307 "traddr": "10.0.0.2", 00:08:41.307 "adrfam": "ipv4", 00:08:41.307 "trsvcid": "4420", 00:08:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.307 "hdgst": false, 00:08:41.307 "ddgst": false 00:08:41.307 }, 00:08:41.307 "method": "bdev_nvme_attach_controller" 00:08:41.307 }' 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.307 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.307 "params": { 00:08:41.307 "name": "Nvme1", 00:08:41.307 "trtype": "tcp", 00:08:41.307 "traddr": "10.0.0.2", 00:08:41.307 "adrfam": "ipv4", 00:08:41.307 "trsvcid": "4420", 00:08:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.307 "hdgst": false, 00:08:41.307 "ddgst": false 00:08:41.307 }, 00:08:41.307 "method": "bdev_nvme_attach_controller" 00:08:41.307 }' 00:08:41.566 [2024-07-26 18:09:07.482437] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:41.566 [2024-07-26 18:09:07.482434] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:41.566 [2024-07-26 18:09:07.482430] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:41.566 [2024-07-26 18:09:07.482430] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:41.566 [2024-07-26 18:09:07.482518] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:08:41.566 [2024-07-26 18:09:07.482520] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:41.566 [2024-07-26 18:09:07.482519] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:08:41.566 [2024-07-26 18:09:07.482520] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:08:41.566 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.566 [2024-07-26 18:09:07.624029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.566 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.566 [2024-07-26 18:09:07.654170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.825 [2024-07-26 18:09:07.724816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.825 [2024-07-26 18:09:07.730182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:41.825 [2024-07-26 18:09:07.755338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.825 [2024-07-26 18:09:07.827957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.825 [2024-07-26 18:09:07.832576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.825 [2024-07-26 18:09:07.857747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.825 [2024-07-26 18:09:07.933150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.825 [2024-07-26 18:09:07.935341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:41.825 [2024-07-26 18:09:07.963165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.083 [2024-07-26 18:09:08.038004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:42.083 Running I/O for 1 seconds... 00:08:42.083 Running I/O for 1 seconds... 00:08:42.341 Running I/O for 1 seconds... 00:08:42.341 Running I/O for 1 seconds...
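The four bdevperf instances run concurrently, one workload per core mask; the -i instance ids line up with the spdk1..spdk4 shared-memory file prefixes in the EAL parameter lines above, and bdev_io_wait.sh reaps each job by pid (the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID variables and the wait 1371502 call seen earlier). A sketch of that launch-and-reap shape, reusing the $bdevperf path from the previous sketch and assuming the pids are recorded with $! (the script waits on each pid in turn; a single wait over all four is equivalent here):

    # One bdevperf per workload, each on its own core, reaped by pid when done.
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-workload one-second latency summaries follow.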
00:08:43.276 00:08:43.276 Latency(us) 00:08:43.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.276 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:43.276 Nvme1n1 : 1.01 7205.58 28.15 0.00 0.00 17663.92 7281.78 37088.52 00:08:43.276 =================================================================================================================== 00:08:43.276 Total : 7205.58 28.15 0.00 0.00 17663.92 7281.78 37088.52 00:08:43.276 00:08:43.276 Latency(us) 00:08:43.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.276 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:43.276 Nvme1n1 : 1.02 7931.12 30.98 0.00 0.00 16000.31 6165.24 32039.82 00:08:43.276 =================================================================================================================== 00:08:43.276 Total : 7931.12 30.98 0.00 0.00 16000.31 6165.24 32039.82 00:08:43.276 00:08:43.276 Latency(us) 00:08:43.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.276 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:43.276 Nvme1n1 : 1.00 181710.04 709.80 0.00 0.00 701.64 295.82 940.56 00:08:43.276 =================================================================================================================== 00:08:43.276 Total : 181710.04 709.80 0.00 0.00 701.64 295.82 940.56 00:08:43.276 00:08:43.276 Latency(us) 00:08:43.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.276 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:43.276 Nvme1n1 : 1.01 7741.32 30.24 0.00 0.00 16472.18 7087.60 33593.27 00:08:43.276 =================================================================================================================== 00:08:43.276 Total : 7741.32 30.24 0.00 0.00 16472.18 7087.60 33593.27 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1371505 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1371509 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1371514 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:43.535 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.535 rmmod nvme_tcp 00:08:43.535 rmmod nvme_fabrics 00:08:43.793 rmmod nvme_keyring 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1371194 ']' 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1371194 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1371194 ']' 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1371194 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1371194 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1371194' 00:08:43.794 killing process with pid 1371194 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1371194 00:08:43.794 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1371194 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.054 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.990 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.990 00:08:45.990 real 0m7.189s 00:08:45.990 user 0m15.061s 00:08:45.990 sys 0m3.750s 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.990 ************************************ 00:08:45.990 END TEST 
nvmf_bdev_io_wait 00:08:45.990 ************************************ 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.990 ************************************ 00:08:45.990 START TEST nvmf_queue_depth 00:08:45.990 ************************************ 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.990 * Looking for test storage... 00:08:45.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.990 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.991 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:08:48.520 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.521 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.521 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.521 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.521 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:08:48.521 00:08:48.521 --- 10.0.0.2 ping statistics --- 00:08:48.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.521 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:08:48.521 00:08:48.521 --- 10.0.0.1 ping statistics --- 00:08:48.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.521 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.521 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1374064 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1374064 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1374064 ']' 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
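The two ping exchanges above confirm the topology that nvmf_tcp_init assembled from the e810 pair: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to host the target, while cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator. Condensed from the trace, the wiring amounts to the following; device names are specific to this host, so treat it as a sketch rather than a drop-in script:

  # Target NIC lives in its own network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Initiator NIC stays in the default namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # Let NVMe/TCP (port 4420) in, then verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1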
00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.522 [2024-07-26 18:09:14.319585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:48.522 [2024-07-26 18:09:14.319672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.522 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.522 [2024-07-26 18:09:14.357633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.522 [2024-07-26 18:09:14.383498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.522 [2024-07-26 18:09:14.470055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.522 [2024-07-26 18:09:14.470127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.522 [2024-07-26 18:09:14.470142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.522 [2024-07-26 18:09:14.470155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.522 [2024-07-26 18:09:14.470166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.522 [2024-07-26 18:09:14.470193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.522 [2024-07-26 18:09:14.615015] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.522 Malloc0 00:08:48.522 18:09:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.522 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.780 [2024-07-26 18:09:14.675417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1374084 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1374084 /var/tmp/bdevperf.sock 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1374084 ']' 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.780 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.780 [2024-07-26 18:09:14.725120] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:48.780 [2024-07-26 18:09:14.725195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374084 ] 00:08:48.780 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.780 [2024-07-26 18:09:14.762525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.780 [2024-07-26 18:09:14.792431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.780 [2024-07-26 18:09:14.883533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.038 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.038 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:49.038 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:49.038 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.038 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.038 NVMe0n1 00:08:49.038 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.038 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:49.303 Running I/O for 10 seconds... 
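Pulling the xtrace above together, the queue-depth test provisions the target and drives bdevperf entirely over two RPC sockets: the target's default /var/tmp/spdk.sock and bdevperf's /var/tmp/bdevperf.sock. The same sequence, condensed (rpc.py and bdevperf.py live under scripts/ and examples/bdev/bdevperf/ in the SPDK tree; paths shortened here):

  # Target provisioning, against /var/tmp/spdk.sock inside the target netns.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf starts idle (-z), attaches the remote namespace over TCP, then runs
  # 1024 outstanding 4 KiB verify I/Os for 10 seconds, as in the result table below.
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests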
00:08:59.307
00:08:59.307 Latency(us)
00:08:59.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.307 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:59.307 Verification LBA range: start 0x0 length 0x4000
00:08:59.307 NVMe0n1 : 10.10 8387.39 32.76 0.00 0.00 121498.57 24563.86 73788.68
00:08:59.307 ===================================================================================================================
00:08:59.307 Total : 8387.39 32.76 0.00 0.00 121498.57 24563.86 73788.68
00:08:59.307 0
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1374084
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1374084 ']'
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1374084
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374084
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374084'
00:08:59.307 killing process with pid 1374084
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1374084
00:08:59.307 Received shutdown signal, test time was about 10.000000 seconds
00:08:59.307
00:08:59.307 Latency(us)
00:08:59.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.307 ===================================================================================================================
00:08:59.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:59.307 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1374084
00:08:59.564 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:08:59.564 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:08:59.564 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:59.564 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:59.565 rmmod nvme_tcp
00:08:59.565 rmmod nvme_fabrics
00:08:59.565 rmmod nvme_keyring
00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:08:59.565
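The verify result above is also self-consistent under Little's law: with 1024 commands kept outstanding, mean latency should be close to queue depth divided by IOPS. A quick check, plain arithmetic assuming steady state (ramp-up and teardown account for the small gap):

  awk 'BEGIN { qd = 1024; iops = 8387.39
               printf "%.0f us expected\n", qd / iops * 1e6 }'
  # prints 122088 us expected, within half a percent of the 121498.57 us average reported above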
18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1374064 ']' 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1374064 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1374064 ']' 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1374064 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374064 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374064' 00:08:59.565 killing process with pid 1374064 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1374064 00:08:59.565 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1374064 00:08:59.822 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.822 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.822 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.822 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.822 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.822 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.823 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.823 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.352 00:09:02.352 real 0m15.920s 00:09:02.352 user 0m22.382s 00:09:02.352 sys 0m3.095s 00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.352 ************************************ 00:09:02.352 END TEST nvmf_queue_depth 00:09:02.352 ************************************ 00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:09:02.352 18:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.352 ************************************ 00:09:02.352 START TEST nvmf_target_multipath 00:09:02.352 ************************************ 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:02.352 * Looking for test storage... 00:09:02.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.352 18:09:28 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.352 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.353 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.255 18:09:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.255 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:04.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:04.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:04.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:04.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:04.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:04.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms
00:09:04.256
00:09:04.256 --- 10.0.0.2 ping statistics ---
00:09:04.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:04.256 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:04.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:04.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:09:04.256
00:09:04.256 --- 10.0.0.1 ping statistics ---
00:09:04.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:04.256 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:09:04.256 only one NIC for nvmf test
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:04.256 rmmod nvme_tcp
00:09:04.256 rmmod nvme_fabrics
00:09:04.256 rmmod nvme_keyring
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:04.256 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
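Both nvmf_tcp_init (earlier) and the nvmftestfini/nvmf_tcp_fini pass that just ran reduce to a short ip-netns recipe. The sketch below is a standalone condensation for this log, not the harness's own helpers, and `ip netns delete` is an assumption standing in for _remove_spdk_ns, whose body the trace hides behind xtrace_disable_per_cmd.

#!/usr/bin/env bash
# Setup: two back-to-back E810 ports become an isolated target/initiator pair.
NS=cvl_0_0_ns_spdk TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target port enters the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Teardown, mirroring the fini pass above.
modprobe -v -r nvme-tcp nvme-fabrics            # unload the initiator stack
ip netns delete "$NS"                           # assumption: returns cvl_0_0 to the root namespace
ip -4 addr flush "$INITIATOR_IF"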
00:09:06.787
00:09:06.787 real 0m4.393s
00:09:06.787 user 0m0.823s
00:09:06.787 sys 0m1.563s
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:06.787 ************************************
00:09:06.787 END TEST nvmf_target_multipath
00:09:06.787 ************************************
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:06.787 ************************************
00:09:06.787 START TEST nvmf_zcopy ************************************
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:06.787 * Looking for test storage...
00:09:06.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:06.787 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.788 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
local -ga x722 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.685 18:09:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.685 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:09:08.686 00:09:08.686 --- 10.0.0.2 ping statistics --- 00:09:08.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.686 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms
00:09:08.686
00:09:08.686 --- 10.0.0.1 ping statistics ---
00:09:08.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.686 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1379185
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1379185
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1379185 ']'
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:08.686 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.686 [2024-07-26 18:09:34.671227] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:08.686 [2024-07-26 18:09:34.671316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:08.686 EAL: No free 2048 kB hugepages reported on node 1
00:09:08.686 [2024-07-26 18:09:34.712455] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:08.686 [2024-07-26 18:09:34.741574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.943 [2024-07-26 18:09:34.832409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:08.943 [2024-07-26 18:09:34.832478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:08.943 [2024-07-26 18:09:34.832500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:08.943 [2024-07-26 18:09:34.832521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:08.943 [2024-07-26 18:09:34.832536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:08.943 [2024-07-26 18:09:34.832585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 [2024-07-26 18:09:34.980618] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 [2024-07-26 18:09:34.996846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.943 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 malloc0
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
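Spelled out against scripts/rpc.py instead of the rpc_cmd wrapper, the target provisioning the zcopy test just performed looks like the sketch below (flags copied from the trace; the $rpc shorthand is illustrative only). Note --zcopy on nvmf_create_transport: that is the option under test here.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                        # 32 MB ramdisk, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1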
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:08.943 {
00:09:08.943 "params": {
00:09:08.943 "name": "Nvme$subsystem",
00:09:08.943 "trtype": "$TEST_TRANSPORT",
00:09:08.943 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:08.943 "adrfam": "ipv4",
00:09:08.943 "trsvcid": "$NVMF_PORT",
00:09:08.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:08.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:08.943 "hdgst": ${hdgst:-false},
00:09:08.943 "ddgst": ${ddgst:-false}
00:09:08.943 },
00:09:08.943 "method": "bdev_nvme_attach_controller"
00:09:08.943 }
00:09:08.943 EOF
00:09:08.943 )")
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:08.943 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:08.943 "params": {
00:09:08.943 "name": "Nvme1",
00:09:08.943 "trtype": "tcp",
00:09:08.943 "traddr": "10.0.0.2",
00:09:08.943 "adrfam": "ipv4",
00:09:08.943 "trsvcid": "4420",
00:09:08.943 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:08.943 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:08.943 "hdgst": false,
00:09:08.943 "ddgst": false
00:09:08.943 },
00:09:08.943 "method": "bdev_nvme_attach_controller"
00:09:08.943 }'
00:09:09.201 [2024-07-26 18:09:35.092151] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:09.201 [2024-07-26 18:09:35.092228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379291 ]
00:09:09.201 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.201 [2024-07-26 18:09:35.130162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:09.201 [2024-07-26 18:09:35.161598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.201 [2024-07-26 18:09:35.255644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.459 Running I/O for 10 seconds...
00:09:21.655
00:09:21.656
00:09:21.656                                                                    Latency(us)
00:09:21.656 Device Information    : runtime(s)      IOPS    MiB/s    Fail/s    TO/s     Average       min        max
00:09:21.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:21.656 Verification LBA range: start 0x0 length 0x1000
00:09:21.656 Nvme1n1               :      10.02   5669.71    44.29      0.00    0.00    22514.14   4029.25   32816.55
00:09:21.656 ===================================================================================================================
00:09:21.656 Total                 :              5669.71    44.29      0.00    0.00    22514.14   4029.25   32816.55
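Both bdevperf invocations in this test (the -w verify run that just finished and the -w randrw run that follows) consume a config produced by the gen_nvmf_target_json heredoc traced above. A trimmed, self-contained re-creation of that pattern, with defaults baked in from this run (the real helper in test/nvmf/common.sh takes its values from the environment and may do more than shown):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .    # pretty-print/validate, as the trace does
}

Hypothetical usage mirroring the runs in this log (with the default single subsystem the joined output is already valid JSON for jq):
bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192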
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1380491
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:21.656 {
00:09:21.656 "params": {
00:09:21.656 "name": "Nvme$subsystem",
00:09:21.656 "trtype": "$TEST_TRANSPORT",
00:09:21.656 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:21.656 "adrfam": "ipv4",
00:09:21.656 "trsvcid": "$NVMF_PORT",
00:09:21.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:21.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:21.656 "hdgst": ${hdgst:-false},
00:09:21.656 "ddgst": ${ddgst:-false}
00:09:21.656 },
00:09:21.656 "method": "bdev_nvme_attach_controller"
00:09:21.656 }
00:09:21.656 EOF
00:09:21.656 )")
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:21.656 [2024-07-26 18:09:45.881418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.656 [2024-07-26 18:09:45.881465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:21.656 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:21.656 "params": {
00:09:21.656 "name": "Nvme1",
00:09:21.656 "trtype": "tcp",
00:09:21.656 "traddr": "10.0.0.2",
00:09:21.656 "adrfam": "ipv4",
00:09:21.656 "trsvcid": "4420",
00:09:21.656 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:21.656 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:21.656 "hdgst": false,
00:09:21.656 "ddgst": false
00:09:21.656 },
00:09:21.656 "method": "bdev_nvme_attach_controller"
00:09:21.656 }'
00:09:21.656 [2024-07-26 18:09:45.889380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.656 [2024-07-26 18:09:45.889421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.656 [2024-07-26 18:09:45.897382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.656 [2024-07-26 18:09:45.897407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.656 [2024-07-26 18:09:45.905411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.656 [2024-07-26 18:09:45.905434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.656 [2024-07-26 18:09:45.913443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.656 [2024-07-26 18:09:45.913465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.656 [2024-07-26 18:09:45.917528] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:21.656 [2024-07-26 18:09:45.917600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380491 ]
00:09:21.656 [2024-07-26 18:09:45.921468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.656 [2024-07-26 18:09:45.921492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.656 [... the error pair above repeats at roughly 8 ms intervals throughout application startup; duplicate occurrences collapsed ...]
00:09:21.656 EAL: No free 2048 kB hugepages reported on node 1
00:09:21.656 [2024-07-26 18:09:45.951623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:21.656 [2024-07-26 18:09:45.981718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.657 [2024-07-26 18:09:46.074030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.657 Running I/O for 5 seconds...
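The "DPDK EAL parameters: bdevperf ..." banner and the five-second run above are what SPDK's bdevperf example app prints at startup. A minimal sketch of such an invocation, assuming the standard in-tree example binary; the exact command line is not captured in this log, and the queue depth, I/O size, and workload flags below are illustrative assumptions, not values taken from the log:

# sketch: -m 0x1 matches "Total cores available: 1", -t 5 matches "Running I/O for 5 seconds..."
# -q (queue depth), -o (I/O size in bytes) and -w (workload) are assumed values, not from the log
./build/examples/bdevperf -m 0x1 -q 128 -o 4096 -w verify -t 5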
00:09:21.658 [2024-07-26 18:09:46.370742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.658 [2024-07-26 18:09:46.370770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:23.250 [... the same error pair continues at roughly 11 ms intervals for the duration of the I/O run; several hundred duplicate occurrences through [2024-07-26 18:09:49.184803] collapsed ...]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.196517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.196549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.208225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.208257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.219690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.219721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.231373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.231400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.243450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.243493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.255160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.255190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.266915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.266947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.278133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.278160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.289479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.289510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.300715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.300745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.313775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.250 [2024-07-26 18:09:49.313805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.250 [2024-07-26 18:09:49.324169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.324197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.251 [2024-07-26 18:09:49.335298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.335326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.251 [2024-07-26 18:09:49.348514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.348547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.251 [2024-07-26 18:09:49.359406] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.359450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.251 [2024-07-26 18:09:49.371121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.371150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.251 [2024-07-26 18:09:49.382585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.382617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.251 [2024-07-26 18:09:49.393891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.251 [2024-07-26 18:09:49.393923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.405520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.405552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.417279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.417309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.428809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.428842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.440042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.440080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.451687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.451719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.463126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.463155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.474373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.474402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.486017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.486045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.499259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.499287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.509963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.509994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.521681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.521712] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.533541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.533573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.546811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.546842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.557446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.557477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.569693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.569724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.580885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.580916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.592096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.592124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.603308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.603335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.615159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.615186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.626511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.626542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.637659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.637690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.510 [2024-07-26 18:09:49.648836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.510 [2024-07-26 18:09:49.648867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.662112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.662140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.672530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.672561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.684308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.684336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.695846] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.695877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.706936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.706981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.718207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.718236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.729589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.729620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.741089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.741117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.752386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.752414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.763893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.763924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.775423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.775454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.786498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.786529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.798020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.798049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.809517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.809550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.821155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.821184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.832749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.832780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.844757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.844788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.856276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.856303] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.867843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.867873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.879373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.879402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.893250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.893278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.769 [2024-07-26 18:09:49.903976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.769 [2024-07-26 18:09:49.904020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.028 [2024-07-26 18:09:49.915460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.028 [2024-07-26 18:09:49.915491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.926899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.926930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.938339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.938366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.949544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.949571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.961123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.961151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.972403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.972434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.983710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.983738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:49.994941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:49.994969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.007381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.007440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.018320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.018353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.029641] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.029670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.040954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.040983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.052782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.052810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.065539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.065570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.076039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.076076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.087004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.087031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.098549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.098577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.110039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.110076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.121651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.121682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.133336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.133363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.144600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.144627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.155609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.155636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.029 [2024-07-26 18:09:50.166494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.029 [2024-07-26 18:09:50.166521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.179587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.179617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.190464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.190494] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.201724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.201755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.213166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.213193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.224325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.224356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.235660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.235690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.246999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.247027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.258479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.258510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.271622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.271661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.282213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.282250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.293757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.293788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.305234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.305261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.316679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.316709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.328103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.328130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.339266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.339293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.350590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.350636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.362179] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.362207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.373623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.373653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.385142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.385169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.396378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.396405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.287 [2024-07-26 18:09:50.407679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.287 [2024-07-26 18:09:50.407709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.288 [2024-07-26 18:09:50.418902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.288 [2024-07-26 18:09:50.418932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.288 [2024-07-26 18:09:50.430331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.288 [2024-07-26 18:09:50.430361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.441949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.441979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.453822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.453853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.465050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.465091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.476567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.476596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.488095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.488130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.499921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.499952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.511624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.511655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.522995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.523041] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.534203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.534231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.545725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.545756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.557150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.557178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.568567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.568597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.580223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.580254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.591333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.591361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.602842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.602872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.614157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.614185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.625028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.625056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.546 [2024-07-26 18:09:50.636365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.546 [2024-07-26 18:09:50.636396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.547 [2024-07-26 18:09:50.647858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.547 [2024-07-26 18:09:50.647888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.547 [2024-07-26 18:09:50.659092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.547 [2024-07-26 18:09:50.659124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.547 [2024-07-26 18:09:50.670388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.547 [2024-07-26 18:09:50.670418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.547 [2024-07-26 18:09:50.681559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.547 [2024-07-26 18:09:50.681590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.692837] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.692867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.703882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.703917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.715299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.715329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.728488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.728518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.739387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.739417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.750770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.750799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.761889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.761921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.773215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.773243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.786863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.786893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.797201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.797228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.807935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.807962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.819213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.819240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.830575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.830605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.842185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.842213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.862773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.862806] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.874081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.874108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.885102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.885129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.896134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.896161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.907621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.907651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.919227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.919258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.805 [2024-07-26 18:09:50.930932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.805 [2024-07-26 18:09:50.930974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.806 [2024-07-26 18:09:50.942662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.806 [2024-07-26 18:09:50.942693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:50.953782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:50.953812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:50.965620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:50.965650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:50.977414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:50.977445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:50.989013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:50.989040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.000377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.000407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.011512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.011543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.023262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.023293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.034726] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.034756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.046238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.046266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.057597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.057628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.069205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.069232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.080661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.080690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.092505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.092548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.103560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.103588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.115183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.115211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.126824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.126854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.138531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.138561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.150089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.150117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.161609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.161639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.064 [2024-07-26 18:09:51.173475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.064 [2024-07-26 18:09:51.173506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.065 [2024-07-26 18:09:51.184553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.065 [2024-07-26 18:09:51.184584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.065 [2024-07-26 18:09:51.195627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.065 [2024-07-26 18:09:51.195659] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.065 [2024-07-26 18:09:51.206765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.065 [2024-07-26 18:09:51.206795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.218130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.218158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.229195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.229222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.240779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.240810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.252468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.252499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.263897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.263924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.275319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.275347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.286733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.286775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.298627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.298659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.311760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.311791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.322000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.322030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.334616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.334647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.345971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.346001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.357519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.357546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.369135] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.369163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.380603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.380634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.391017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.391043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 00:09:25.323 Latency(us) 00:09:25.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.323 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:25.323 Nvme1n1 : 5.01 11151.26 87.12 0.00 0.00 11462.52 5267.15 22330.79 00:09:25.323 =================================================================================================================== 00:09:25.323 Total : 11151.26 87.12 0.00 0.00 11462.52 5267.15 22330.79 00:09:25.323 [2024-07-26 18:09:51.396817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.396845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.404827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.404867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.323 [2024-07-26 18:09:51.412873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.323 [2024-07-26 18:09:51.412907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.324 [2024-07-26 18:09:51.420917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.324 [2024-07-26 18:09:51.420964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.324 [2024-07-26 18:09:51.428931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.324 [2024-07-26 18:09:51.428974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.324 [2024-07-26 18:09:51.436960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.324 [2024-07-26 18:09:51.437004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.324 [2024-07-26 18:09:51.444979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.324 [2024-07-26 18:09:51.445025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.324 [2024-07-26 18:09:51.453006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.324 [2024-07-26 18:09:51.453052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.324 [2024-07-26 18:09:51.461033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.324 [2024-07-26 18:09:51.461104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.582 [2024-07-26 18:09:51.469053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.582 [2024-07-26 18:09:51.469120] 
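A quick sanity check on the summary table above: at the 8192-byte I/O size, 11151.26 IOPS works out to 11151.26 × 8192 / 1048576 ≈ 87.12 MiB/s, which matches the reported throughput column. The "Requested NSID 1 already in use" flood, meanwhile, is simply what nvmf_subsystem_add_ns returns whenever a namespace ID that is already claimed is requested again. A minimal sketch of that failure mode, assuming a running SPDK target and the stock scripts/rpc.py client — the bdev and subsystem names below are illustrative, not taken from this run:

  # Sketch only: provoke "Requested NSID 1 already in use" on a scratch subsystem.
  scripts/rpc.py bdev_malloc_create -b scratch0 32 512                           # 32 MB ram-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a             # allow any host, for brevity
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 scratch0 -n 1  # first add claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 scratch0 -n 1  # second add fails with the error above

In this log the error is expected: the zcopy test keeps re-issuing the add while bdevperf runs, so each retry is rejected until the namespace is finally removed below.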
[... the same error pair continues, now at roughly 8 ms intervals, from 18:09:51.396 through 18:09:51.629, elapsed 00:09:25.323 to 00:09:25.582 ...]
00:09:25.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1380491) - No such process
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1380491
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.582 delay0
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.582 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:25.582 EAL: No free 2048 kB hugepages reported on node 1
00:09:25.582 [2024-07-26 18:09:51.707724] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
Skipping unsupported current discovery service or discovery service referral 00:09:32.136 Initializing NVMe Controllers 00:09:32.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:32.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:32.136 Initialization complete. Launching workers. 00:09:32.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1307 00:09:32.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1594, failed to submit 33 00:09:32.136 success 1422, unsuccess 172, failed 0 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.136 rmmod nvme_tcp 00:09:32.136 rmmod nvme_fabrics 00:09:32.136 rmmod nvme_keyring 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1379185 ']' 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1379185 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1379185 ']' 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1379185 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1379185 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1379185' 00:09:32.136 killing process with pid 1379185 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1379185 00:09:32.136 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1379185 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.395 18:09:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.395 18:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.297 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:34.297 00:09:34.297 real 0m27.964s 00:09:34.297 user 0m41.262s 00:09:34.297 sys 0m8.469s 00:09:34.297 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.297 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.297 ************************************ 00:09:34.297 END TEST nvmf_zcopy 00:09:34.297 ************************************ 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 ************************************ 00:09:34.556 START TEST nvmf_nmic 00:09:34.556 ************************************ 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.556 * Looking for test storage... 
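The nvmf_zcopy run above finishes with the standard nvmftestfini teardown: unload the initiator-side NVMe modules, kill the nvmf_tgt process, and flush the test interfaces. A rough standalone sketch of that sequence follows; the interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk) and the nvmfpid variable are taken from this log and are host-specific, and the netns removal step is an assumption about what _remove_spdk_ns does:

  # teardown sketch mirroring the nvmftestfini trace above, not the script itself
  sync
  modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt started for the test
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # clear the initiator-side address
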
00:09:34.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.556 18:10:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.556 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:36.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.459 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:36.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.460 18:10:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:36.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:36.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:36.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:36.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms
00:09:36.460
00:09:36.460 --- 10.0.0.2 ping statistics ---
00:09:36.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:36.460 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:36.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:36.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:09:36.460
00:09:36.460 --- 10.0.0.1 ping statistics ---
00:09:36.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:36.460 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1383889
00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1383889 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1383889 ']' 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.460 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.718 [2024-07-26 18:10:02.624449] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:36.718 [2024-07-26 18:10:02.624518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.718 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.718 [2024-07-26 18:10:02.663020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:36.719 [2024-07-26 18:10:02.690320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.719 [2024-07-26 18:10:02.782993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.719 [2024-07-26 18:10:02.783054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.719 [2024-07-26 18:10:02.783094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.719 [2024-07-26 18:10:02.783107] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.719 [2024-07-26 18:10:02.783116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
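The nmic test brings up its target the same way: nvmf_tgt is launched inside the target network namespace and the harness blocks until the RPC socket named in the "Waiting for process..." message above accepts connections. A minimal sketch of that start-up, with SPDK_BIN and the /var/tmp/spdk.sock socket path taken from this log as assumptions:

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket exists
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
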
00:09:36.719 [2024-07-26 18:10:02.783182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.719 [2024-07-26 18:10:02.783238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.719 [2024-07-26 18:10:02.783269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.719 [2024-07-26 18:10:02.783271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 [2024-07-26 18:10:02.936236] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 Malloc0 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 [2024-07-26 18:10:02.987404] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:36.977 test case1: single bdev can't be used in multiple subsystems
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.977 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:36.977 [2024-07-26 18:10:03.011228] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:36.977 [2024-07-26 18:10:03.011265] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:36.977 [2024-07-26 18:10:03.011281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.977 request:
00:09:36.977 {
00:09:36.977   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:36.977   "namespace": {
00:09:36.977     "bdev_name": "Malloc0",
00:09:36.977     "no_auto_visible": false
00:09:36.977   },
00:09:36.977   "method": "nvmf_subsystem_add_ns",
00:09:36.977   "req_id": 1
00:09:36.977 }
00:09:36.977 Got JSON-RPC error response
00:09:36.977 response:
00:09:36.977 {
00:09:36.977   "code": -32602,
00:09:36.977   "message": "Invalid parameters"
00:09:36.977 }
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:09:36.977 Adding namespace failed - expected result.
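That JSON-RPC failure is the point of test case1: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 must be rejected with -32602. The same check can be reproduced by hand with rpc.py; every command below appears in the trace above, and only the standalone framing is new:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      && echo 'unexpected: shared bdev accepted' \
      || echo 'rejected as expected (Invalid parameters)'
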
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:36.977 test case2: host connect to nvmf target in multiple paths
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:36.977 [2024-07-26 18:10:03.019375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.977 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:37.909 18:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:09:38.472 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:09:38.472 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:09:38.472 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:38.472 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:09:38.472 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:09:40.371 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:40.371 [global]
00:09:40.371 thread=1
00:09:40.371 invalidate=1
00:09:40.371 rw=write
00:09:40.371 time_based=1
00:09:40.371 runtime=1
00:09:40.371 ioengine=libaio
00:09:40.371 direct=1
00:09:40.371 bs=4096
00:09:40.371 iodepth=1
00:09:40.371 norandommap=0
00:09:40.371 numjobs=1
00:09:40.371
00:09:40.371 verify_dump=1
00:09:40.371 verify_backlog=512
00:09:40.371 verify_state_save=0
00:09:40.371 do_verify=1
00:09:40.371 verify=crc32c-intel
00:09:40.371 [job0]
00:09:40.371 filename=/dev/nvme0n1
00:09:40.371 Could not set queue depth (nvme0n1)
00:09:40.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:40.629 fio-3.35
00:09:40.629 Starting 1 thread
00:09:41.568
00:09:41.568 job0: (groupid=0, jobs=1): err= 0: pid=1384527: Fri Jul 26 18:10:07 2024
00:09:41.568   read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec)
00:09:41.568     slat (nsec): min=8897, max=36834, avg=25203.29, stdev=9986.19
00:09:41.568     clat (usec): min=40762, max=41403, avg=40978.11, stdev=113.18
00:09:41.568      lat (usec): min=40771, max=41421, avg=41003.31, stdev=112.49
00:09:41.568     clat percentiles (usec):
00:09:41.568      |  1.00th=[40633],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:09:41.568      | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:41.568      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:41.568      | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:41.568      | 99.99th=[41157]
00:09:41.568   write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets
00:09:41.568     slat (usec): min=9, max=28781, avg=72.46, stdev=1271.27
00:09:41.568     clat (usec): min=182, max=3542, avg=252.60, stdev=201.79
00:09:41.568      lat (usec): min=192, max=29133, avg=325.06, stdev=1291.63
00:09:41.568     clat percentiles (usec):
00:09:41.568      |  1.00th=[  186],  5.00th=[  190], 10.00th=[  196], 20.00th=[  204],
00:09:41.568      | 30.00th=[  212], 40.00th=[  225], 50.00th=[  233], 60.00th=[  243],
00:09:41.568      | 70.00th=[  253], 80.00th=[  269], 90.00th=[  293], 95.00th=[  310],
00:09:41.568      | 99.00th=[  355], 99.50th=[ 1860], 99.90th=[ 3556], 99.95th=[ 3556],
00:09:41.568      | 99.99th=[ 3556]
00:09:41.568    bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:09:41.568    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:41.568   lat (usec)   : 250=65.48%, 500=30.02%
00:09:41.568   lat (msec)   : 2=0.19%, 4=0.38%, 50=3.94%
00:09:41.568   cpu          : usr=1.07%, sys=0.58%, ctx=536, majf=0, minf=2
00:09:41.568   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:41.568      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:41.568      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:41.568      issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:41.568      latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:41.568
00:09:41.568 Run status group 0 (all jobs):
00:09:41.568    READ: bw=81.6KiB/s (83.5kB/s), 81.6KiB/s-81.6KiB/s (83.5kB/s-83.5kB/s), io=84.0KiB (86.0kB), run=1030-1030msec
00:09:41.568   WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec
00:09:41.568
00:09:41.568 Disk stats (read/write):
00:09:41.568   nvme0n1: ios=43/512, merge=0/0, ticks=1682/122, in_queue=1804, util=98.60%
00:09:41.568 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:41.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
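The write workload above is driven by scripts/fio-wrapper; an equivalent standalone job file can be reconstructed from the [global]/[job0] dump printed before the run. This is a sketch rather than the wrapper's own file, and /dev/nvme0n1 is simply whatever device the connect step produced on this host:

  # nmic-write.fio, rebuilt from the job dump above
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1

Run it with 'fio nmic-write.fio' once the namespace is connected.
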
00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.838 rmmod nvme_tcp 00:09:41.838 rmmod nvme_fabrics 00:09:41.838 rmmod nvme_keyring 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1383889 ']' 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1383889 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1383889 ']' 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1383889 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1383889 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1383889' 00:09:41.838 killing process with pid 1383889 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1383889 00:09:41.838 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1383889 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.135 18:10:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.135 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.045 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.045 00:09:44.045 real 0m9.708s 00:09:44.045 user 0m22.252s 00:09:44.045 sys 0m2.202s 00:09:44.045 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.045 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.045 ************************************ 00:09:44.045 END TEST nvmf_nmic 00:09:44.045 ************************************ 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.304 ************************************ 00:09:44.304 START TEST nvmf_fio_target 00:09:44.304 ************************************ 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:44.304 * Looking for test storage... 00:09:44.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.304 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.305 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.209 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.209 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.209 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.209 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.209 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.209 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:46.210 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:46.210 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:46.210 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:46.210 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.210 18:10:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:09:46.210 00:09:46.210 --- 10.0.0.2 ping statistics --- 00:09:46.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.210 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:09:46.210 00:09:46.210 --- 10.0.0.1 ping statistics --- 00:09:46.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.210 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.210 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1386602 00:09:46.211 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.471 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1386602 00:09:46.471 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1386602 ']' 00:09:46.471 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.471 18:10:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.471 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.471 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.471 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.471 [2024-07-26 18:10:12.401157] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:46.471 [2024-07-26 18:10:12.401235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.471 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.471 [2024-07-26 18:10:12.438742] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:46.471 [2024-07-26 18:10:12.470682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.471 [2024-07-26 18:10:12.563020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.471 [2024-07-26 18:10:12.563091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.471 [2024-07-26 18:10:12.563122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.471 [2024-07-26 18:10:12.563136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.471 [2024-07-26 18:10:12.563149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
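For reference, the nvmftestinit/nvmfappstart bring-up traced above reduces to the following sequence. This is a condensed sketch reassembled from the xtrace output; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the values this particular run used, not fixed constants:

    # flush any stale addresses on the NIC pair
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # move the target-side port into a private network namespace so target
    # and initiator exchange traffic over a real TCP path on one host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (default namespace) and the target side
    # (inside the namespace), then bring the links up
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the default port and verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # finally the target application is launched inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Every command above appears verbatim in the trace; only the grouping comments are added.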
00:09:46.471 [2024-07-26 18:10:12.563205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.471 [2024-07-26 18:10:12.563274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.471 [2024-07-26 18:10:12.563323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.471 [2024-07-26 18:10:12.563325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.730 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.988 [2024-07-26 18:10:12.976440] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.988 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.251 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:47.251 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.509 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:47.509 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.766 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:47.766 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.022 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:48.023 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:48.279 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.536 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:48.536 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.793 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:48.793 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.051 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:49.051 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:49.308 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:49.566 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:49.566 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.823 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:49.824 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:50.081 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.339 [2024-07-26 18:10:16.336432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.339 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:50.597 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:50.856 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:51.423 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:51.423 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.423 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.423 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:51.423 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:51.423 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.959 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.959 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.959 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.959 18:10:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:53.959 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.959 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:53.959 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:53.959 [global] 00:09:53.959 thread=1 00:09:53.959 invalidate=1 00:09:53.959 rw=write 00:09:53.959 time_based=1 00:09:53.959 runtime=1 00:09:53.959 ioengine=libaio 00:09:53.959 direct=1 00:09:53.959 bs=4096 00:09:53.959 iodepth=1 00:09:53.959 norandommap=0 00:09:53.959 numjobs=1 00:09:53.959 00:09:53.959 verify_dump=1 00:09:53.959 verify_backlog=512 00:09:53.959 verify_state_save=0 00:09:53.959 do_verify=1 00:09:53.959 verify=crc32c-intel 00:09:53.959 [job0] 00:09:53.959 filename=/dev/nvme0n1 00:09:53.959 [job1] 00:09:53.959 filename=/dev/nvme0n2 00:09:53.959 [job2] 00:09:53.959 filename=/dev/nvme0n3 00:09:53.959 [job3] 00:09:53.959 filename=/dev/nvme0n4 00:09:53.959 Could not set queue depth (nvme0n1) 00:09:53.959 Could not set queue depth (nvme0n2) 00:09:53.959 Could not set queue depth (nvme0n3) 00:09:53.959 Could not set queue depth (nvme0n4) 00:09:53.959 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.959 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.959 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.959 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.959 fio-3.35 00:09:53.959 Starting 4 threads 00:09:54.893 00:09:54.893 job0: (groupid=0, jobs=1): err= 0: pid=1387563: Fri Jul 26 18:10:21 2024 00:09:54.893 read: IOPS=40, BW=163KiB/s (167kB/s)(164KiB/1005msec) 00:09:54.893 slat (nsec): min=8103, max=67188, avg=20100.98, stdev=12261.75 00:09:54.893 clat (usec): min=404, max=42072, avg=20887.95, stdev=20536.13 00:09:54.893 lat (usec): min=419, max=42084, avg=20908.05, stdev=20534.60 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 404], 5.00th=[ 469], 10.00th=[ 537], 20.00th=[ 627], 00:09:54.893 | 30.00th=[ 668], 40.00th=[ 717], 50.00th=[11863], 60.00th=[41157], 00:09:54.893 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:54.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.893 | 99.99th=[42206] 00:09:54.893 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:54.893 slat (nsec): min=6472, max=48891, avg=11476.54, stdev=5635.11 00:09:54.893 clat (usec): min=189, max=2300, avg=274.01, stdev=131.25 00:09:54.893 lat (usec): min=197, max=2316, avg=285.49, stdev=133.61 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:09:54.893 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:09:54.893 | 70.00th=[ 243], 80.00th=[ 334], 90.00th=[ 449], 95.00th=[ 490], 00:09:54.893 | 99.00th=[ 562], 99.50th=[ 570], 99.90th=[ 2311], 99.95th=[ 2311], 00:09:54.893 | 99.99th=[ 2311] 00:09:54.893 bw ( KiB/s): min= 4104, max= 4104, per=24.93%, avg=4104.00, stdev= 0.00, samples=1 00:09:54.893 iops : min= 1026, max= 1026, 
avg=1026.00, stdev= 0.00, samples=1 00:09:54.893 lat (usec) : 250=69.44%, 500=19.71%, 750=6.51%, 1000=0.36% 00:09:54.893 lat (msec) : 4=0.18%, 20=0.18%, 50=3.62% 00:09:54.893 cpu : usr=0.20%, sys=0.80%, ctx=555, majf=0, minf=1 00:09:54.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.893 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.893 job1: (groupid=0, jobs=1): err= 0: pid=1387566: Fri Jul 26 18:10:21 2024 00:09:54.893 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:54.893 slat (nsec): min=5997, max=59965, avg=13880.68, stdev=6100.97 00:09:54.893 clat (usec): min=284, max=1338, avg=341.37, stdev=42.82 00:09:54.893 lat (usec): min=291, max=1356, avg=355.25, stdev=44.15 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:09:54.893 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:09:54.893 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 367], 00:09:54.893 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 1188], 99.95th=[ 1336], 00:09:54.893 | 99.99th=[ 1336] 00:09:54.893 write: IOPS=1574, BW=6298KiB/s (6449kB/s)(6304KiB/1001msec); 0 zone resets 00:09:54.893 slat (nsec): min=7758, max=55569, avg=17300.40, stdev=7474.50 00:09:54.893 clat (usec): min=180, max=1684, avg=261.85, stdev=86.69 00:09:54.893 lat (usec): min=188, max=1709, avg=279.15, stdev=87.59 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:09:54.893 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:09:54.893 | 70.00th=[ 251], 80.00th=[ 277], 90.00th=[ 388], 95.00th=[ 437], 00:09:54.893 | 99.00th=[ 529], 99.50th=[ 644], 99.90th=[ 889], 99.95th=[ 1680], 00:09:54.893 | 99.99th=[ 1680] 00:09:54.893 bw ( KiB/s): min= 8208, max= 8208, per=49.86%, avg=8208.00, stdev= 0.00, samples=1 00:09:54.893 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:09:54.893 lat (usec) : 250=35.31%, 500=63.75%, 750=0.71%, 1000=0.13% 00:09:54.893 lat (msec) : 2=0.10% 00:09:54.893 cpu : usr=4.00%, sys=6.10%, ctx=3113, majf=0, minf=1 00:09:54.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.893 issued rwts: total=1536,1576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.893 job2: (groupid=0, jobs=1): err= 0: pid=1387567: Fri Jul 26 18:10:21 2024 00:09:54.893 read: IOPS=1422, BW=5690KiB/s (5827kB/s)(5696KiB/1001msec) 00:09:54.893 slat (nsec): min=5831, max=43770, avg=13653.13, stdev=6132.68 00:09:54.893 clat (usec): min=346, max=2959, avg=402.99, stdev=74.64 00:09:54.893 lat (usec): min=353, max=2965, avg=416.64, stdev=75.17 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 388], 00:09:54.893 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 404], 00:09:54.893 | 70.00th=[ 408], 80.00th=[ 416], 90.00th=[ 424], 95.00th=[ 433], 00:09:54.893 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 1237], 99.95th=[ 2966], 
00:09:54.893 | 99.99th=[ 2966] 00:09:54.893 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:54.893 slat (nsec): min=7496, max=56499, avg=15508.54, stdev=6892.92 00:09:54.893 clat (usec): min=194, max=1357, avg=240.73, stdev=42.60 00:09:54.893 lat (usec): min=204, max=1382, avg=256.24, stdev=43.46 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 227], 00:09:54.893 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 241], 00:09:54.893 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:09:54.893 | 99.00th=[ 322], 99.50th=[ 383], 99.90th=[ 996], 99.95th=[ 1352], 00:09:54.893 | 99.99th=[ 1352] 00:09:54.893 bw ( KiB/s): min= 8192, max= 8192, per=49.76%, avg=8192.00, stdev= 0.00, samples=1 00:09:54.893 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:54.893 lat (usec) : 250=40.84%, 500=58.85%, 750=0.14%, 1000=0.07% 00:09:54.893 lat (msec) : 2=0.07%, 4=0.03% 00:09:54.893 cpu : usr=3.70%, sys=5.60%, ctx=2963, majf=0, minf=1 00:09:54.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.893 issued rwts: total=1424,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.893 job3: (groupid=0, jobs=1): err= 0: pid=1387568: Fri Jul 26 18:10:21 2024 00:09:54.893 read: IOPS=23, BW=95.9KiB/s (98.2kB/s)(96.0KiB/1001msec) 00:09:54.893 slat (nsec): min=11306, max=39987, avg=17844.46, stdev=8966.15 00:09:54.893 clat (usec): min=450, max=42283, avg=35153.70, stdev=15047.36 00:09:54.893 lat (usec): min=464, max=42316, avg=35171.55, stdev=15049.34 00:09:54.893 clat percentiles (usec): 00:09:54.893 | 1.00th=[ 453], 5.00th=[ 545], 10.00th=[ 685], 20.00th=[40633], 00:09:54.893 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:54.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:54.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.894 | 99.99th=[42206] 00:09:54.894 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:54.894 slat (nsec): min=8568, max=69159, avg=15096.68, stdev=6883.10 00:09:54.894 clat (usec): min=206, max=1044, avg=287.69, stdev=87.13 00:09:54.894 lat (usec): min=216, max=1059, avg=302.79, stdev=88.63 00:09:54.894 clat percentiles (usec): 00:09:54.894 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229], 00:09:54.894 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 258], 00:09:54.894 | 70.00th=[ 289], 80.00th=[ 375], 90.00th=[ 420], 95.00th=[ 465], 00:09:54.894 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[ 1045], 99.95th=[ 1045], 00:09:54.894 | 99.99th=[ 1045] 00:09:54.894 bw ( KiB/s): min= 4096, max= 4096, per=24.88%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.894 lat (usec) : 250=51.68%, 500=43.28%, 750=0.93% 00:09:54.894 lat (msec) : 2=0.19%, 10=0.19%, 50=3.73% 00:09:54.894 cpu : usr=0.40%, sys=1.00%, ctx=538, majf=0, minf=2 00:09:54.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:54.894 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.894 00:09:54.894 Run status group 0 (all jobs): 00:09:54.894 READ: bw=11.8MiB/s (12.3MB/s), 95.9KiB/s-6138KiB/s (98.2kB/s-6285kB/s), io=11.8MiB (12.4MB), run=1001-1005msec 00:09:54.894 WRITE: bw=16.1MiB/s (16.9MB/s), 2038KiB/s-6298KiB/s (2087kB/s-6449kB/s), io=16.2MiB (16.9MB), run=1001-1005msec 00:09:54.894 00:09:54.894 Disk stats (read/write): 00:09:54.894 nvme0n1: ios=90/512, merge=0/0, ticks=719/135, in_queue=854, util=86.97% 00:09:54.894 nvme0n2: ios=1170/1536, merge=0/0, ticks=548/373, in_queue=921, util=89.32% 00:09:54.894 nvme0n3: ios=1115/1536, merge=0/0, ticks=495/343, in_queue=838, util=95.39% 00:09:54.894 nvme0n4: ios=67/512, merge=0/0, ticks=747/139, in_queue=886, util=95.78% 00:09:54.894 18:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:54.894 [global] 00:09:54.894 thread=1 00:09:54.894 invalidate=1 00:09:54.894 rw=randwrite 00:09:54.894 time_based=1 00:09:54.894 runtime=1 00:09:54.894 ioengine=libaio 00:09:54.894 direct=1 00:09:54.894 bs=4096 00:09:54.894 iodepth=1 00:09:54.894 norandommap=0 00:09:54.894 numjobs=1 00:09:54.894 00:09:54.894 verify_dump=1 00:09:54.894 verify_backlog=512 00:09:54.894 verify_state_save=0 00:09:54.894 do_verify=1 00:09:54.894 verify=crc32c-intel 00:09:54.894 [job0] 00:09:54.894 filename=/dev/nvme0n1 00:09:54.894 [job1] 00:09:54.894 filename=/dev/nvme0n2 00:09:54.894 [job2] 00:09:54.894 filename=/dev/nvme0n3 00:09:54.894 [job3] 00:09:54.894 filename=/dev/nvme0n4 00:09:55.152 Could not set queue depth (nvme0n1) 00:09:55.152 Could not set queue depth (nvme0n2) 00:09:55.152 Could not set queue depth (nvme0n3) 00:09:55.152 Could not set queue depth (nvme0n4) 00:09:55.152 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.152 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.152 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.152 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.152 fio-3.35 00:09:55.152 Starting 4 threads 00:09:56.536 00:09:56.536 job0: (groupid=0, jobs=1): err= 0: pid=1387912: Fri Jul 26 18:10:22 2024 00:09:56.536 read: IOPS=988, BW=3952KiB/s (4047kB/s)(3956KiB/1001msec) 00:09:56.536 slat (nsec): min=6480, max=40684, avg=15428.67, stdev=5517.82 00:09:56.536 clat (usec): min=324, max=41923, avg=673.26, stdev=2929.53 00:09:56.536 lat (usec): min=332, max=41931, avg=688.69, stdev=2929.46 00:09:56.536 clat percentiles (usec): 00:09:56.536 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 388], 20.00th=[ 416], 00:09:56.536 | 30.00th=[ 429], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 478], 00:09:56.536 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 506], 95.00th=[ 523], 00:09:56.536 | 99.00th=[ 570], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:56.536 | 99.99th=[41681] 00:09:56.536 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:56.536 slat (nsec): min=7579, max=72439, avg=19878.94, stdev=8560.76 00:09:56.536 clat (usec): min=195, max=538, avg=281.97, stdev=66.45 00:09:56.536 lat (usec): min=204, max=559, avg=301.85, stdev=67.49 00:09:56.536 clat 
percentiles (usec): 00:09:56.536 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:09:56.536 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 273], 00:09:56.536 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 392], 95.00th=[ 441], 00:09:56.536 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 537], 99.95th=[ 537], 00:09:56.536 | 99.99th=[ 537] 00:09:56.536 bw ( KiB/s): min= 6184, max= 6184, per=34.74%, avg=6184.00, stdev= 0.00, samples=1 00:09:56.536 iops : min= 1546, max= 1546, avg=1546.00, stdev= 0.00, samples=1 00:09:56.536 lat (usec) : 250=21.16%, 500=70.69%, 750=7.85% 00:09:56.536 lat (msec) : 20=0.05%, 50=0.25% 00:09:56.536 cpu : usr=2.80%, sys=4.70%, ctx=2016, majf=0, minf=2 00:09:56.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.536 issued rwts: total=989,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.536 job1: (groupid=0, jobs=1): err= 0: pid=1387913: Fri Jul 26 18:10:22 2024 00:09:56.536 read: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec) 00:09:56.536 slat (nsec): min=6152, max=48872, avg=15184.95, stdev=9000.36 00:09:56.536 clat (usec): min=295, max=41553, avg=492.17, stdev=1290.07 00:09:56.536 lat (usec): min=303, max=41575, avg=507.36, stdev=1290.71 00:09:56.536 clat percentiles (usec): 00:09:56.536 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:09:56.536 | 30.00th=[ 351], 40.00th=[ 416], 50.00th=[ 457], 60.00th=[ 478], 00:09:56.536 | 70.00th=[ 498], 80.00th=[ 529], 90.00th=[ 586], 95.00th=[ 660], 00:09:56.536 | 99.00th=[ 807], 99.50th=[ 1012], 99.90th=[ 1450], 99.95th=[41681], 00:09:56.536 | 99.99th=[41681] 00:09:56.536 write: IOPS=1223, BW=4893KiB/s (5011kB/s)(4908KiB/1003msec); 0 zone resets 00:09:56.536 slat (usec): min=7, max=34851, avg=49.75, stdev=994.40 00:09:56.536 clat (usec): min=184, max=724, avg=332.95, stdev=91.67 00:09:56.536 lat (usec): min=193, max=35150, avg=382.69, stdev=998.09 00:09:56.536 clat percentiles (usec): 00:09:56.536 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 239], 00:09:56.536 | 30.00th=[ 265], 40.00th=[ 310], 50.00th=[ 338], 60.00th=[ 363], 00:09:56.536 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 478], 00:09:56.536 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 725], 00:09:56.536 | 99.99th=[ 725] 00:09:56.536 bw ( KiB/s): min= 4096, max= 5712, per=27.55%, avg=4904.00, stdev=1142.68, samples=2 00:09:56.536 iops : min= 1024, max= 1428, avg=1226.00, stdev=285.67, samples=2 00:09:56.536 lat (usec) : 250=13.15%, 500=72.72%, 750=13.42%, 1000=0.44% 00:09:56.536 lat (msec) : 2=0.22%, 50=0.04% 00:09:56.536 cpu : usr=3.09%, sys=5.79%, ctx=2255, majf=0, minf=1 00:09:56.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.536 issued rwts: total=1024,1227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.536 job2: (groupid=0, jobs=1): err= 0: pid=1387914: Fri Jul 26 18:10:22 2024 00:09:56.536 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:09:56.536 slat (nsec): min=13096, max=33101, avg=20210.38, stdev=8170.84 
00:09:56.536 clat (usec): min=40844, max=41283, avg=40986.63, stdev=83.79 00:09:56.536 lat (usec): min=40876, max=41299, avg=41006.84, stdev=81.21 00:09:56.536 clat percentiles (usec): 00:09:56.536 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:56.536 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:56.536 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:56.536 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:56.536 | 99.99th=[41157] 00:09:56.536 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:56.536 slat (nsec): min=6504, max=70366, avg=18741.78, stdev=10595.30 00:09:56.536 clat (usec): min=196, max=538, avg=310.62, stdev=67.55 00:09:56.536 lat (usec): min=213, max=557, avg=329.36, stdev=68.07 00:09:56.536 clat percentiles (usec): 00:09:56.536 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 249], 00:09:56.536 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 310], 00:09:56.536 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 437], 00:09:56.537 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 537], 00:09:56.537 | 99.99th=[ 537] 00:09:56.537 bw ( KiB/s): min= 4096, max= 4096, per=23.01%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.537 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.537 lat (usec) : 250=19.89%, 500=75.61%, 750=0.56% 00:09:56.537 lat (msec) : 50=3.94% 00:09:56.537 cpu : usr=0.48%, sys=0.97%, ctx=534, majf=0, minf=1 00:09:56.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.537 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.537 job3: (groupid=0, jobs=1): err= 0: pid=1387915: Fri Jul 26 18:10:22 2024 00:09:56.537 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:56.537 slat (nsec): min=5784, max=51400, avg=13304.96, stdev=5351.71 00:09:56.537 clat (usec): min=281, max=3143, avg=337.28, stdev=74.41 00:09:56.537 lat (usec): min=288, max=3160, avg=350.59, stdev=74.85 00:09:56.537 clat percentiles (usec): 00:09:56.537 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 326], 00:09:56.537 | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:09:56.537 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 351], 95.00th=[ 359], 00:09:56.537 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 873], 99.95th=[ 3130], 00:09:56.537 | 99.99th=[ 3130] 00:09:56.537 write: IOPS=1832, BW=7329KiB/s (7505kB/s)(7336KiB/1001msec); 0 zone resets 00:09:56.537 slat (nsec): min=7475, max=65612, avg=14993.38, stdev=6922.45 00:09:56.537 clat (usec): min=177, max=3113, avg=228.86, stdev=71.80 00:09:56.537 lat (usec): min=186, max=3122, avg=243.85, stdev=73.24 00:09:56.537 clat percentiles (usec): 00:09:56.537 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:09:56.537 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:09:56.537 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 269], 00:09:56.537 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 490], 99.95th=[ 3130], 00:09:56.537 | 99.99th=[ 3130] 00:09:56.537 bw ( KiB/s): min= 8192, max= 8192, per=46.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:56.537 iops : min= 2048, max= 2048, avg=2048.00, 
stdev= 0.00, samples=1 00:09:56.537 lat (usec) : 250=46.08%, 500=53.83%, 1000=0.03% 00:09:56.537 lat (msec) : 4=0.06% 00:09:56.537 cpu : usr=3.40%, sys=6.60%, ctx=3372, majf=0, minf=1 00:09:56.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.537 issued rwts: total=1536,1834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.537 00:09:56.537 Run status group 0 (all jobs): 00:09:56.537 READ: bw=13.5MiB/s (14.2MB/s), 81.3KiB/s-6138KiB/s (83.3kB/s-6285kB/s), io=13.9MiB (14.6MB), run=1001-1033msec 00:09:56.537 WRITE: bw=17.4MiB/s (18.2MB/s), 1983KiB/s-7329KiB/s (2030kB/s-7505kB/s), io=18.0MiB (18.8MB), run=1001-1033msec 00:09:56.537 00:09:56.537 Disk stats (read/write): 00:09:56.537 nvme0n1: ios=996/1024, merge=0/0, ticks=1103/285, in_queue=1388, util=98.50% 00:09:56.537 nvme0n2: ios=845/1024, merge=0/0, ticks=1412/349, in_queue=1761, util=97.87% 00:09:56.537 nvme0n3: ios=16/512, merge=0/0, ticks=657/149, in_queue=806, util=88.83% 00:09:56.537 nvme0n4: ios=1315/1536, merge=0/0, ticks=1335/348, in_queue=1683, util=98.11% 00:09:56.537 18:10:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:56.537 [global] 00:09:56.537 thread=1 00:09:56.537 invalidate=1 00:09:56.537 rw=write 00:09:56.537 time_based=1 00:09:56.537 runtime=1 00:09:56.537 ioengine=libaio 00:09:56.537 direct=1 00:09:56.537 bs=4096 00:09:56.537 iodepth=128 00:09:56.537 norandommap=0 00:09:56.537 numjobs=1 00:09:56.537 00:09:56.537 verify_dump=1 00:09:56.537 verify_backlog=512 00:09:56.537 verify_state_save=0 00:09:56.537 do_verify=1 00:09:56.537 verify=crc32c-intel 00:09:56.537 [job0] 00:09:56.537 filename=/dev/nvme0n1 00:09:56.537 [job1] 00:09:56.537 filename=/dev/nvme0n2 00:09:56.537 [job2] 00:09:56.537 filename=/dev/nvme0n3 00:09:56.537 [job3] 00:09:56.537 filename=/dev/nvme0n4 00:09:56.537 Could not set queue depth (nvme0n1) 00:09:56.537 Could not set queue depth (nvme0n2) 00:09:56.537 Could not set queue depth (nvme0n3) 00:09:56.537 Could not set queue depth (nvme0n4) 00:09:56.795 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.795 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.795 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.795 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.795 fio-3.35 00:09:56.795 Starting 4 threads 00:09:58.169 00:09:58.169 job0: (groupid=0, jobs=1): err= 0: pid=1388147: Fri Jul 26 18:10:23 2024 00:09:58.169 read: IOPS=5813, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1007msec) 00:09:58.169 slat (usec): min=2, max=12361, avg=81.08, stdev=600.99 00:09:58.169 clat (usec): min=1378, max=56053, avg=11121.58, stdev=4177.90 00:09:58.169 lat (usec): min=1402, max=56059, avg=11202.66, stdev=4205.22 00:09:58.169 clat percentiles (usec): 00:09:58.169 | 1.00th=[ 4047], 5.00th=[ 7046], 10.00th=[ 8291], 20.00th=[ 8979], 00:09:58.169 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:09:58.169 | 70.00th=[10945], 80.00th=[13173], 
90.00th=[15139], 95.00th=[17957], 00:09:58.169 | 99.00th=[30016], 99.50th=[32900], 99.90th=[33162], 99.95th=[55837], 00:09:58.169 | 99.99th=[55837] 00:09:58.169 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:09:58.169 slat (usec): min=4, max=9925, avg=69.04, stdev=431.15 00:09:58.169 clat (usec): min=774, max=56038, avg=10199.49, stdev=5456.36 00:09:58.169 lat (usec): min=787, max=56047, avg=10268.54, stdev=5478.97 00:09:58.169 clat percentiles (usec): 00:09:58.169 | 1.00th=[ 2802], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6915], 00:09:58.169 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10683], 00:09:58.169 | 70.00th=[11076], 80.00th=[11338], 90.00th=[12387], 95.00th=[13566], 00:09:58.169 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:09:58.169 | 99.99th=[55837] 00:09:58.169 bw ( KiB/s): min=22864, max=26288, per=41.22%, avg=24576.00, stdev=2421.13, samples=2 00:09:58.169 iops : min= 5716, max= 6572, avg=6144.00, stdev=605.28, samples=2 00:09:58.169 lat (usec) : 1000=0.05% 00:09:58.169 lat (msec) : 2=0.40%, 4=1.63%, 10=43.86%, 20=50.99%, 50=3.02% 00:09:58.169 lat (msec) : 100=0.06% 00:09:58.169 cpu : usr=6.06%, sys=11.13%, ctx=568, majf=0, minf=1 00:09:58.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:58.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.169 issued rwts: total=5854,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.169 job1: (groupid=0, jobs=1): err= 0: pid=1388148: Fri Jul 26 18:10:23 2024 00:09:58.169 read: IOPS=1036, BW=4146KiB/s (4246kB/s)(4200KiB/1013msec) 00:09:58.169 slat (usec): min=3, max=45956, avg=388.37, stdev=2898.70 00:09:58.169 clat (msec): min=12, max=118, avg=43.09, stdev=26.18 00:09:58.169 lat (msec): min=14, max=118, avg=43.48, stdev=26.38 00:09:58.169 clat percentiles (msec): 00:09:58.169 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 19], 20.00th=[ 20], 00:09:58.169 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 33], 60.00th=[ 51], 00:09:58.169 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:09:58.169 | 99.00th=[ 94], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 120], 00:09:58.169 | 99.99th=[ 120] 00:09:58.169 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:09:58.169 slat (usec): min=4, max=32168, avg=378.06, stdev=2228.70 00:09:58.169 clat (msec): min=16, max=104, avg=52.63, stdev=24.93 00:09:58.169 lat (msec): min=16, max=105, avg=53.01, stdev=25.14 00:09:58.169 clat percentiles (msec): 00:09:58.169 | 1.00th=[ 20], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:09:58.169 | 30.00th=[ 25], 40.00th=[ 37], 50.00th=[ 62], 60.00th=[ 65], 00:09:58.169 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 83], 95.00th=[ 87], 00:09:58.169 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 106], 00:09:58.169 | 99.99th=[ 106] 00:09:58.169 bw ( KiB/s): min= 4096, max= 7384, per=9.63%, avg=5740.00, stdev=2324.97, samples=2 00:09:58.169 iops : min= 1024, max= 1846, avg=1435.00, stdev=581.24, samples=2 00:09:58.169 lat (msec) : 20=11.56%, 50=39.17%, 100=49.03%, 250=0.23% 00:09:58.169 cpu : usr=2.08%, sys=1.88%, ctx=170, majf=0, minf=1 00:09:58.169 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:09:58.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.169 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.169 issued rwts: total=1050,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.169 job2: (groupid=0, jobs=1): err= 0: pid=1388149: Fri Jul 26 18:10:23 2024 00:09:58.169 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(10.0MiB/1019msec) 00:09:58.169 slat (usec): min=3, max=30682, avg=188.90, stdev=1342.26 00:09:58.169 clat (usec): min=5638, max=93058, avg=21536.98, stdev=11025.31 00:09:58.169 lat (usec): min=5646, max=93065, avg=21725.88, stdev=11158.12 00:09:58.169 clat percentiles (usec): 00:09:58.169 | 1.00th=[ 6718], 5.00th=[12256], 10.00th=[13042], 20.00th=[13435], 00:09:58.169 | 30.00th=[14877], 40.00th=[15401], 50.00th=[18220], 60.00th=[22152], 00:09:58.169 | 70.00th=[23987], 80.00th=[26346], 90.00th=[32637], 95.00th=[40633], 00:09:58.169 | 99.00th=[70779], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:09:58.169 | 99.99th=[92799] 00:09:58.169 write: IOPS=2642, BW=10.3MiB/s (10.8MB/s)(10.5MiB/1019msec); 0 zone resets 00:09:58.169 slat (usec): min=4, max=16141, avg=182.83, stdev=1001.85 00:09:58.169 clat (msec): min=3, max=120, avg=27.50, stdev=22.51 00:09:58.169 lat (msec): min=3, max=120, avg=27.68, stdev=22.64 00:09:58.169 clat percentiles (msec): 00:09:58.169 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:09:58.169 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 24], 00:09:58.169 | 70.00th=[ 26], 80.00th=[ 28], 90.00th=[ 63], 95.00th=[ 83], 00:09:58.169 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:09:58.169 | 99.99th=[ 121] 00:09:58.169 bw ( KiB/s): min=10240, max=10372, per=17.29%, avg=10306.00, stdev=93.34, samples=2 00:09:58.169 iops : min= 2560, max= 2593, avg=2576.50, stdev=23.33, samples=2 00:09:58.169 lat (msec) : 4=0.23%, 10=3.16%, 20=42.79%, 50=46.70%, 100=5.63% 00:09:58.169 lat (msec) : 250=1.48% 00:09:58.169 cpu : usr=3.05%, sys=5.30%, ctx=265, majf=0, minf=1 00:09:58.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:58.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.169 issued rwts: total=2560,2693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.169 job3: (groupid=0, jobs=1): err= 0: pid=1388150: Fri Jul 26 18:10:23 2024 00:09:58.170 read: IOPS=4168, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1012msec) 00:09:58.170 slat (usec): min=2, max=25259, avg=108.29, stdev=895.76 00:09:58.170 clat (usec): min=1531, max=44355, avg=14868.73, stdev=6442.08 00:09:58.170 lat (usec): min=1544, max=53681, avg=14977.01, stdev=6513.97 00:09:58.170 clat percentiles (usec): 00:09:58.170 | 1.00th=[ 2638], 5.00th=[ 5473], 10.00th=[ 7439], 20.00th=[11207], 00:09:58.170 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13829], 60.00th=[14091], 00:09:58.170 | 70.00th=[15139], 80.00th=[18744], 90.00th=[23725], 95.00th=[27919], 00:09:58.170 | 99.00th=[34866], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:09:58.170 | 99.99th=[44303] 00:09:58.170 write: IOPS=4757, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1012msec); 0 zone resets 00:09:58.170 slat (usec): min=3, max=25237, avg=86.94, stdev=747.61 00:09:58.170 clat (usec): min=585, max=86779, avg=13527.70, stdev=8011.06 00:09:58.170 lat (usec): min=599, max=86784, avg=13614.65, stdev=8026.85 00:09:58.170 clat percentiles (usec): 00:09:58.170 | 1.00th=[ 1680], 5.00th=[ 
4948], 10.00th=[ 5932], 20.00th=[ 7767], 00:09:58.170 | 30.00th=[ 9634], 40.00th=[11994], 50.00th=[12911], 60.00th=[13566], 00:09:58.170 | 70.00th=[14484], 80.00th=[16450], 90.00th=[20841], 95.00th=[27132], 00:09:58.170 | 99.00th=[45876], 99.50th=[65799], 99.90th=[76022], 99.95th=[76022], 00:09:58.170 | 99.99th=[86508] 00:09:58.170 bw ( KiB/s): min=16384, max=21280, per=31.59%, avg=18832.00, stdev=3461.99, samples=2 00:09:58.170 iops : min= 4096, max= 5320, avg=4708.00, stdev=865.50, samples=2 00:09:58.170 lat (usec) : 750=0.09%, 1000=0.07% 00:09:58.170 lat (msec) : 2=0.83%, 4=1.12%, 10=20.75%, 20=62.43%, 50=14.20% 00:09:58.170 lat (msec) : 100=0.51% 00:09:58.170 cpu : usr=3.56%, sys=5.54%, ctx=331, majf=0, minf=1 00:09:58.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:58.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.170 issued rwts: total=4219,4815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.170 00:09:58.170 Run status group 0 (all jobs): 00:09:58.170 READ: bw=52.5MiB/s (55.0MB/s), 4146KiB/s-22.7MiB/s (4246kB/s-23.8MB/s), io=53.4MiB (56.0MB), run=1007-1019msec 00:09:58.170 WRITE: bw=58.2MiB/s (61.1MB/s), 6065KiB/s-23.8MiB/s (6211kB/s-25.0MB/s), io=59.3MiB (62.2MB), run=1007-1019msec 00:09:58.170 00:09:58.170 Disk stats (read/write): 00:09:58.170 nvme0n1: ios=4902/5120, merge=0/0, ticks=51726/49573, in_queue=101299, util=100.00% 00:09:58.170 nvme0n2: ios=1073/1311, merge=0/0, ticks=15706/19503, in_queue=35209, util=88.22% 00:09:58.170 nvme0n3: ios=2105/2423, merge=0/0, ticks=42117/61968, in_queue=104085, util=91.46% 00:09:58.170 nvme0n4: ios=3635/3927, merge=0/0, ticks=45168/38781, in_queue=83949, util=99.37% 00:09:58.170 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:58.170 [global] 00:09:58.170 thread=1 00:09:58.170 invalidate=1 00:09:58.170 rw=randwrite 00:09:58.170 time_based=1 00:09:58.170 runtime=1 00:09:58.170 ioengine=libaio 00:09:58.170 direct=1 00:09:58.170 bs=4096 00:09:58.170 iodepth=128 00:09:58.170 norandommap=0 00:09:58.170 numjobs=1 00:09:58.170 00:09:58.170 verify_dump=1 00:09:58.170 verify_backlog=512 00:09:58.170 verify_state_save=0 00:09:58.170 do_verify=1 00:09:58.170 verify=crc32c-intel 00:09:58.170 [job0] 00:09:58.170 filename=/dev/nvme0n1 00:09:58.170 [job1] 00:09:58.170 filename=/dev/nvme0n2 00:09:58.170 [job2] 00:09:58.170 filename=/dev/nvme0n3 00:09:58.170 [job3] 00:09:58.170 filename=/dev/nvme0n4 00:09:58.170 Could not set queue depth (nvme0n1) 00:09:58.170 Could not set queue depth (nvme0n2) 00:09:58.170 Could not set queue depth (nvme0n3) 00:09:58.170 Could not set queue depth (nvme0n4) 00:09:58.170 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.170 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.170 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.170 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.170 fio-3.35 00:09:58.170 Starting 4 threads 00:09:59.571 00:09:59.571 job0: (groupid=0, jobs=1): err= 0: 
pid=1388374: Fri Jul 26 18:10:25 2024 00:09:59.571 read: IOPS=3549, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1005msec) 00:09:59.571 slat (usec): min=3, max=11583, avg=142.12, stdev=722.19 00:09:59.571 clat (usec): min=3681, max=34819, avg=18098.69, stdev=6447.17 00:09:59.571 lat (usec): min=4009, max=34836, avg=18240.81, stdev=6479.71 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11338], 20.00th=[11863], 00:09:59.571 | 30.00th=[12911], 40.00th=[13698], 50.00th=[17695], 60.00th=[20579], 00:09:59.571 | 70.00th=[22414], 80.00th=[23462], 90.00th=[25560], 95.00th=[31327], 00:09:59.571 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:09:59.571 | 99.99th=[34866] 00:09:59.571 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:59.571 slat (usec): min=4, max=7421, avg=127.62, stdev=660.00 00:09:59.571 clat (usec): min=7017, max=31026, avg=17364.31, stdev=4275.58 00:09:59.571 lat (usec): min=7040, max=31050, avg=17491.93, stdev=4260.71 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11207], 20.00th=[13566], 00:09:59.571 | 30.00th=[15795], 40.00th=[16450], 50.00th=[16712], 60.00th=[17171], 00:09:59.571 | 70.00th=[19006], 80.00th=[21890], 90.00th=[22414], 95.00th=[24773], 00:09:59.571 | 99.00th=[29230], 99.50th=[29492], 99.90th=[31065], 99.95th=[31065], 00:09:59.571 | 99.99th=[31065] 00:09:59.571 bw ( KiB/s): min=12288, max=16384, per=21.12%, avg=14336.00, stdev=2896.31, samples=2 00:09:59.571 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:59.571 lat (msec) : 4=0.01%, 10=1.86%, 20=63.40%, 50=34.72% 00:09:59.571 cpu : usr=5.07%, sys=6.37%, ctx=322, majf=0, minf=7 00:09:59.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:59.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.571 issued rwts: total=3567,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.571 job1: (groupid=0, jobs=1): err= 0: pid=1388375: Fri Jul 26 18:10:25 2024 00:09:59.571 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:09:59.571 slat (usec): min=2, max=11786, avg=98.33, stdev=699.14 00:09:59.571 clat (usec): min=4075, max=55791, avg=12873.15, stdev=4461.79 00:09:59.571 lat (usec): min=4082, max=55813, avg=12971.48, stdev=4521.67 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 5276], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10683], 00:09:59.571 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[12387], 00:09:59.571 | 70.00th=[13304], 80.00th=[14746], 90.00th=[17171], 95.00th=[19006], 00:09:59.571 | 99.00th=[34866], 99.50th=[41681], 99.90th=[49021], 99.95th=[55837], 00:09:59.571 | 99.99th=[55837] 00:09:59.571 write: IOPS=5320, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1009msec); 0 zone resets 00:09:59.571 slat (usec): min=3, max=14650, avg=80.09, stdev=494.09 00:09:59.571 clat (usec): min=1874, max=55817, avg=11353.30, stdev=5670.25 00:09:59.571 lat (usec): min=1916, max=55843, avg=11433.39, stdev=5693.51 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 3720], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7832], 00:09:59.571 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11338], 60.00th=[11469], 00:09:59.571 | 70.00th=[11731], 80.00th=[11994], 90.00th=[13829], 95.00th=[18744], 00:09:59.571 | 99.00th=[45876], 
99.50th=[46924], 99.90th=[52691], 99.95th=[52691], 00:09:59.571 | 99.99th=[55837] 00:09:59.571 bw ( KiB/s): min=20464, max=21456, per=30.88%, avg=20960.00, stdev=701.45, samples=2 00:09:59.571 iops : min= 5116, max= 5364, avg=5240.00, stdev=175.36, samples=2 00:09:59.571 lat (msec) : 2=0.02%, 4=0.77%, 10=18.40%, 20=77.00%, 50=3.67% 00:09:59.571 lat (msec) : 100=0.13% 00:09:59.571 cpu : usr=6.94%, sys=9.62%, ctx=538, majf=0, minf=19 00:09:59.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:59.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.571 issued rwts: total=5120,5368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.571 job2: (groupid=0, jobs=1): err= 0: pid=1388376: Fri Jul 26 18:10:25 2024 00:09:59.571 read: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1004msec) 00:09:59.571 slat (usec): min=2, max=22665, avg=203.36, stdev=1224.67 00:09:59.571 clat (usec): min=2881, max=57921, avg=26261.09, stdev=7219.85 00:09:59.571 lat (usec): min=3661, max=76249, avg=26464.44, stdev=7267.61 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[15401], 5.00th=[16909], 10.00th=[20579], 20.00th=[21627], 00:09:59.571 | 30.00th=[22414], 40.00th=[23200], 50.00th=[24249], 60.00th=[26084], 00:09:59.571 | 70.00th=[28443], 80.00th=[32113], 90.00th=[34341], 95.00th=[41157], 00:09:59.571 | 99.00th=[53740], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:09:59.571 | 99.99th=[57934] 00:09:59.571 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:59.571 slat (usec): min=3, max=6763, avg=146.49, stdev=695.94 00:09:59.571 clat (usec): min=5464, max=42157, avg=19144.99, stdev=5879.35 00:09:59.571 lat (usec): min=5742, max=42162, avg=19291.49, stdev=5898.55 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 8979], 5.00th=[11600], 10.00th=[13042], 20.00th=[14222], 00:09:59.571 | 30.00th=[15139], 40.00th=[16712], 50.00th=[17171], 60.00th=[20055], 00:09:59.571 | 70.00th=[21627], 80.00th=[22414], 90.00th=[28705], 95.00th=[32637], 00:09:59.571 | 99.00th=[33424], 99.50th=[34341], 99.90th=[41681], 99.95th=[42206], 00:09:59.571 | 99.99th=[42206] 00:09:59.571 bw ( KiB/s): min=11456, max=12288, per=17.49%, avg=11872.00, stdev=588.31, samples=2 00:09:59.571 iops : min= 2864, max= 3072, avg=2968.00, stdev=147.08, samples=2 00:09:59.571 lat (msec) : 4=0.16%, 10=1.27%, 20=35.98%, 50=61.97%, 100=0.62% 00:09:59.571 cpu : usr=3.39%, sys=4.69%, ctx=297, majf=0, minf=17 00:09:59.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:59.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.571 issued rwts: total=2584,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.571 job3: (groupid=0, jobs=1): err= 0: pid=1388377: Fri Jul 26 18:10:25 2024 00:09:59.571 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:09:59.571 slat (usec): min=3, max=7926, avg=94.41, stdev=530.93 00:09:59.571 clat (usec): min=3218, max=23204, avg=13138.13, stdev=2181.08 00:09:59.571 lat (usec): min=3229, max=26246, avg=13232.54, stdev=2207.07 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 6194], 5.00th=[10159], 10.00th=[11338], 20.00th=[12125], 
00:09:59.571 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:09:59.571 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15664], 95.00th=[17433], 00:09:59.571 | 99.00th=[20579], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:09:59.571 | 99.99th=[23200] 00:09:59.571 write: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1007msec); 0 zone resets 00:09:59.571 slat (usec): min=4, max=13364, avg=96.57, stdev=537.94 00:09:59.571 clat (usec): min=479, max=33342, avg=13178.47, stdev=2555.25 00:09:59.571 lat (usec): min=1759, max=33347, avg=13275.04, stdev=2550.49 00:09:59.571 clat percentiles (usec): 00:09:59.571 | 1.00th=[ 5866], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[11994], 00:09:59.571 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:09:59.571 | 70.00th=[13698], 80.00th=[14615], 90.00th=[16712], 95.00th=[18220], 00:09:59.571 | 99.00th=[20055], 99.50th=[21890], 99.90th=[23200], 99.95th=[33424], 00:09:59.571 | 99.99th=[33424] 00:09:59.571 bw ( KiB/s): min=19288, max=20480, per=29.29%, avg=19884.00, stdev=842.87, samples=2 00:09:59.571 iops : min= 4822, max= 5120, avg=4971.00, stdev=210.72, samples=2 00:09:59.571 lat (usec) : 500=0.01% 00:09:59.571 lat (msec) : 2=0.03%, 4=0.06%, 10=5.85%, 20=92.45%, 50=1.60% 00:09:59.571 cpu : usr=5.17%, sys=8.55%, ctx=459, majf=0, minf=9 00:09:59.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:59.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.571 issued rwts: total=4608,5099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.571 00:09:59.571 Run status group 0 (all jobs): 00:09:59.571 READ: bw=61.5MiB/s (64.5MB/s), 10.1MiB/s-19.8MiB/s (10.5MB/s-20.8MB/s), io=62.0MiB (65.0MB), run=1004-1009msec 00:09:59.572 WRITE: bw=66.3MiB/s (69.5MB/s), 12.0MiB/s-20.8MiB/s (12.5MB/s-21.8MB/s), io=66.9MiB (70.1MB), run=1004-1009msec 00:09:59.572 00:09:59.572 Disk stats (read/write): 00:09:59.572 nvme0n1: ios=2924/3072, merge=0/0, ticks=13912/12152, in_queue=26064, util=93.89% 00:09:59.572 nvme0n2: ios=4140/4591, merge=0/0, ticks=51270/51022, in_queue=102292, util=97.16% 00:09:59.572 nvme0n3: ios=2138/2560, merge=0/0, ticks=19857/14168, in_queue=34025, util=89.36% 00:09:59.572 nvme0n4: ios=4118/4246, merge=0/0, ticks=21676/18494, in_queue=40170, util=97.90% 00:09:59.572 18:10:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:59.572 18:10:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1388517 00:09:59.572 18:10:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:59.572 18:10:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:59.572 [global] 00:09:59.572 thread=1 00:09:59.572 invalidate=1 00:09:59.572 rw=read 00:09:59.572 time_based=1 00:09:59.572 runtime=10 00:09:59.572 ioengine=libaio 00:09:59.572 direct=1 00:09:59.572 bs=4096 00:09:59.572 iodepth=1 00:09:59.572 norandommap=1 00:09:59.572 numjobs=1 00:09:59.572 00:09:59.572 [job0] 00:09:59.572 filename=/dev/nvme0n1 00:09:59.572 [job1] 00:09:59.572 filename=/dev/nvme0n2 00:09:59.572 [job2] 00:09:59.572 filename=/dev/nvme0n3 00:09:59.572 [job3] 00:09:59.572 filename=/dev/nvme0n4 00:09:59.572 Could not set queue depth (nvme0n1) 00:09:59.572 
Could not set queue depth (nvme0n2) 00:09:59.572 Could not set queue depth (nvme0n3) 00:09:59.572 Could not set queue depth (nvme0n4) 00:09:59.572 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.572 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.572 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.572 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.572 fio-3.35 00:09:59.572 Starting 4 threads 00:10:02.858 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:02.858 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=32251904, buflen=4096 00:10:02.858 fio: pid=1388619, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:02.858 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:02.858 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.858 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:02.858 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=6324224, buflen=4096 00:10:02.858 fio: pid=1388613, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:03.117 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.117 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:03.117 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8011776, buflen=4096 00:10:03.117 fio: pid=1388609, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:03.375 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=39133184, buflen=4096 00:10:03.375 fio: pid=1388610, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:03.375 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.375 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:03.633 00:10:03.634 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1388609: Fri Jul 26 18:10:29 2024 00:10:03.634 read: IOPS=566, BW=2266KiB/s (2320kB/s)(7824KiB/3453msec) 00:10:03.634 slat (usec): min=5, max=20863, avg=36.12, stdev=623.78 00:10:03.634 clat (usec): min=275, max=42364, avg=1714.23, stdev=7106.63 00:10:03.634 lat (usec): min=281, max=42378, avg=1750.36, stdev=7131.49 00:10:03.634 clat percentiles (usec): 00:10:03.634 | 1.00th=[ 297], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 359], 00:10:03.634 | 30.00th=[ 375], 40.00th=[ 392], 50.00th=[ 416], 60.00th=[ 441], 00:10:03.634 | 70.00th=[ 478], 80.00th=[ 529], 90.00th=[ 619], 95.00th=[ 734], 
00:10:03.634 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:03.634 | 99.99th=[42206] 00:10:03.634 bw ( KiB/s): min= 96, max= 5664, per=11.16%, avg=2524.00, stdev=2616.91, samples=6 00:10:03.634 iops : min= 24, max= 1416, avg=631.00, stdev=654.23, samples=6 00:10:03.634 lat (usec) : 500=74.91%, 750=20.49%, 1000=1.23% 00:10:03.634 lat (msec) : 2=0.20%, 50=3.12% 00:10:03.634 cpu : usr=0.70%, sys=0.93%, ctx=1961, majf=0, minf=1 00:10:03.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 issued rwts: total=1957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.634 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1388610: Fri Jul 26 18:10:29 2024 00:10:03.634 read: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(37.3MiB/3702msec) 00:10:03.634 slat (usec): min=4, max=15682, avg=21.97, stdev=327.20 00:10:03.634 clat (usec): min=263, max=5847, avg=359.50, stdev=89.55 00:10:03.634 lat (usec): min=270, max=16032, avg=381.47, stdev=339.62 00:10:03.634 clat percentiles (usec): 00:10:03.634 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 310], 00:10:03.634 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351], 00:10:03.634 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 465], 95.00th=[ 506], 00:10:03.634 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 709], 99.95th=[ 807], 00:10:03.634 | 99.99th=[ 5866] 00:10:03.634 bw ( KiB/s): min= 8992, max=10864, per=45.74%, avg=10345.00, stdev=631.76, samples=7 00:10:03.634 iops : min= 2248, max= 2716, avg=2586.14, stdev=157.90, samples=7 00:10:03.634 lat (usec) : 500=94.24%, 750=5.65%, 1000=0.07% 00:10:03.634 lat (msec) : 2=0.01%, 10=0.01% 00:10:03.634 cpu : usr=1.92%, sys=4.81%, ctx=9563, majf=0, minf=1 00:10:03.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 issued rwts: total=9555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.634 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1388613: Fri Jul 26 18:10:29 2024 00:10:03.634 read: IOPS=488, BW=1951KiB/s (1998kB/s)(6176KiB/3165msec) 00:10:03.634 slat (usec): min=5, max=12855, avg=23.15, stdev=326.79 00:10:03.634 clat (usec): min=351, max=42476, avg=2007.74, stdev=7642.44 00:10:03.634 lat (usec): min=357, max=55107, avg=2030.88, stdev=7693.85 00:10:03.634 clat percentiles (usec): 00:10:03.634 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 408], 20.00th=[ 441], 00:10:03.634 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 553], 00:10:03.634 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 758], 00:10:03.634 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:10:03.634 | 99.99th=[42730] 00:10:03.634 bw ( KiB/s): min= 96, max= 6176, per=9.07%, avg=2052.00, stdev=2423.80, samples=6 00:10:03.634 iops : min= 24, max= 1544, avg=513.00, stdev=605.95, samples=6 00:10:03.634 lat (usec) : 500=40.00%, 750=54.82%, 1000=1.42% 00:10:03.634 lat (msec) : 2=0.06%, 50=3.62% 00:10:03.634 cpu : usr=0.47%, sys=1.07%, 
ctx=1546, majf=0, minf=1 00:10:03.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.634 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1388619: Fri Jul 26 18:10:29 2024 00:10:03.634 read: IOPS=2726, BW=10.6MiB/s (11.2MB/s)(30.8MiB/2888msec) 00:10:03.634 slat (nsec): min=5385, max=65397, avg=12296.85, stdev=6851.12 00:10:03.634 clat (usec): min=277, max=1238, avg=350.07, stdev=54.78 00:10:03.634 lat (usec): min=282, max=1247, avg=362.37, stdev=56.71 00:10:03.634 clat percentiles (usec): 00:10:03.634 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:10:03.634 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:10:03.634 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 424], 95.00th=[ 478], 00:10:03.634 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 627], 99.95th=[ 644], 00:10:03.634 | 99.99th=[ 1237] 00:10:03.634 bw ( KiB/s): min= 9544, max=11584, per=47.55%, avg=10753.60, stdev=951.15, samples=5 00:10:03.634 iops : min= 2386, max= 2896, avg=2688.40, stdev=237.79, samples=5 00:10:03.634 lat (usec) : 500=97.12%, 750=2.86% 00:10:03.634 lat (msec) : 2=0.01% 00:10:03.634 cpu : usr=2.08%, sys=5.13%, ctx=7875, majf=0, minf=1 00:10:03.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.634 issued rwts: total=7875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.634 00:10:03.634 Run status group 0 (all jobs): 00:10:03.634 READ: bw=22.1MiB/s (23.2MB/s), 1951KiB/s-10.6MiB/s (1998kB/s-11.2MB/s), io=81.8MiB (85.7MB), run=2888-3702msec 00:10:03.634 00:10:03.634 Disk stats (read/write): 00:10:03.634 nvme0n1: ios=1954/0, merge=0/0, ticks=3249/0, in_queue=3249, util=94.65% 00:10:03.634 nvme0n2: ios=9352/0, merge=0/0, ticks=3463/0, in_queue=3463, util=97.78% 00:10:03.634 nvme0n3: ios=1543/0, merge=0/0, ticks=3050/0, in_queue=3050, util=96.38% 00:10:03.634 nvme0n4: ios=7788/0, merge=0/0, ticks=2623/0, in_queue=2623, util=96.78% 00:10:03.634 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.634 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:03.891 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.891 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:04.148 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.148 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:10:04.406 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.406 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:04.665 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:04.665 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1388517 00:10:04.665 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:04.665 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:04.923 nvmf hotplug test: fio failed as expected 00:10:04.923 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.180 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.181 rmmod nvme_tcp 00:10:05.181 rmmod nvme_fabrics 
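For reference, the hotplug check traced above reduces to: keep fio reads in flight against the exported namespaces, delete the backing bdevs over RPC, and confirm fio exits non-zero once each read hits a Remote I/O error (err=121). A minimal sketch of that flow, assuming a target already serving the four namespaces and the same rpc.py socket (bdev names and device paths taken from the trace; illustrative, not the canonical fio.sh):

    #!/usr/bin/env bash
    # Sketch: reads in flight, then yank the backing bdevs out from under them.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    fio --name=hotplug --rw=read --bs=4096 --iodepth=1 --time_based \
        --runtime=10 --direct=1 --ioengine=libaio \
        --filename=/dev/nvme0n1:/dev/nvme0n2:/dev/nvme0n3:/dev/nvme0n4 &
    fio_pid=$!
    sleep 3                                # let I/O ramp up first

    $rpc bdev_raid_delete concat0          # each deletion surfaces on the
    $rpc bdev_raid_delete raid0            # initiator side as EREMOTEIO (121)
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done

    if wait "$fio_pid"; then               # wait returns fio's exit status
        echo "unexpected: fio survived bdev removal" >&2
        exit 1
    fi
    echo "nvmf hotplug test: fio failed as expected"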
00:10:05.181 rmmod nvme_keyring 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1386602 ']' 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1386602 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1386602 ']' 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1386602 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1386602 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1386602' 00:10:05.181 killing process with pid 1386602 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1386602 00:10:05.181 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1386602 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.439 18:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:07.978 00:10:07.978 real 0m23.285s 00:10:07.978 user 1m20.185s 00:10:07.978 sys 0m7.597s 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.978 ************************************ 00:10:07.978 END TEST nvmf_fio_target 00:10:07.978 ************************************ 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.978 ************************************ 00:10:07.978 START TEST nvmf_bdevio 00:10:07.978 ************************************ 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:07.978 * Looking for test storage... 00:10:07.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.978 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.979 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.885 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:09.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:09.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.886 
18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:09.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:09.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.886 18:10:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:10:09.886 00:10:09.886 --- 10.0.0.2 ping statistics --- 00:10:09.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.886 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:09.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:10:09.886 00:10:09.886 --- 10.0.0.1 ping statistics --- 00:10:09.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.886 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.886 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1391247 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1391247 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1391247 ']' 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.887 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.887 [2024-07-26 18:10:35.886424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
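The plumbing nvmf_tcp_init traced above is the standard SPDK phy-test topology: the first e810 port (cvl_0_0, 10.0.0.2) moves into a private network namespace that will host nvmf_tgt, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side. Condensed to just those steps, with the same names and addresses as the trace (illustrative, not the canonical common.sh):

    # Target port into its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1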
00:10:09.887 [2024-07-26 18:10:35.886530] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.887 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.887 [2024-07-26 18:10:35.927292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:09.887 [2024-07-26 18:10:35.955235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.145 [2024-07-26 18:10:36.043909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.145 [2024-07-26 18:10:36.043970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.145 [2024-07-26 18:10:36.043998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.145 [2024-07-26 18:10:36.044010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.146 [2024-07-26 18:10:36.044021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.146 [2024-07-26 18:10:36.044086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.146 [2024-07-26 18:10:36.044149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.146 [2024-07-26 18:10:36.044214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.146 [2024-07-26 18:10:36.044220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.146 [2024-07-26 18:10:36.203583] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.146 Malloc0 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.146 18:10:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.146 [2024-07-26 18:10:36.257158] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:10.146 { 00:10:10.146 "params": { 00:10:10.146 "name": "Nvme$subsystem", 00:10:10.146 "trtype": "$TEST_TRANSPORT", 00:10:10.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.146 "adrfam": "ipv4", 00:10:10.146 "trsvcid": "$NVMF_PORT", 00:10:10.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.146 "hdgst": ${hdgst:-false}, 00:10:10.146 "ddgst": ${ddgst:-false} 00:10:10.146 }, 00:10:10.146 "method": "bdev_nvme_attach_controller" 00:10:10.146 } 00:10:10.146 EOF 00:10:10.146 )") 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
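Steps @18 through @22 of bdevio.sh, traced above, are ordinary RPCs against that socket. Replayed by hand they would look like this (sketch; the rpc() wrapper is shorthand introduced here, not a helper from the harness):

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192    # -o: disable C2H success optimization (TCP only), -u: in-capsule data size
rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk, 512-byte blocks -> the 131072-block Nvme1n1 seen below
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420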
00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:10.146 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:10.146 "params": { 00:10:10.146 "name": "Nvme1", 00:10:10.146 "trtype": "tcp", 00:10:10.146 "traddr": "10.0.0.2", 00:10:10.146 "adrfam": "ipv4", 00:10:10.146 "trsvcid": "4420", 00:10:10.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.146 "hdgst": false, 00:10:10.146 "ddgst": false 00:10:10.146 }, 00:10:10.146 "method": "bdev_nvme_attach_controller" 00:10:10.146 }' 00:10:10.404 [2024-07-26 18:10:36.304527] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:10.404 [2024-07-26 18:10:36.304593] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391389 ] 00:10:10.404 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.404 [2024-07-26 18:10:36.336243] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:10.404 [2024-07-26 18:10:36.365365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.404 [2024-07-26 18:10:36.453754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.404 [2024-07-26 18:10:36.453805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.404 [2024-07-26 18:10:36.453807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.663 I/O targets: 00:10:10.663 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:10.663 00:10:10.663 00:10:10.663 CUnit - A unit testing framework for C - Version 2.1-3 00:10:10.663 http://cunit.sourceforge.net/ 00:10:10.663 00:10:10.663 00:10:10.663 Suite: bdevio tests on: Nvme1n1 00:10:10.921 Test: blockdev write read block ...passed 00:10:10.921 Test: blockdev write zeroes read block ...passed 00:10:10.921 Test: blockdev write zeroes read no split ...passed 00:10:10.921 Test: blockdev write zeroes read split ...passed 00:10:10.921 Test: blockdev write zeroes read split partial ...passed 00:10:10.921 Test: blockdev reset ...[2024-07-26 18:10:36.996978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:10.921 [2024-07-26 18:10:36.997091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb17940 (9): Bad file descriptor 00:10:11.181 [2024-07-26 18:10:37.099825] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
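The gen_nvmf_target_json helper whose trace brackets this point is just a heredoc per subsystem, comma-joined and pretty-printed. Its core, trimmed to the fields the trace shows (sketch; the function name here is illustrative, and the real helper embeds the result in a complete SPDK JSON config before bdevio reads it from /dev/fd/62):

gen_attach_fragments() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # Comma-join the fragments and pretty-print; valid standalone JSON only for
    # the single-subsystem case shown in the trace.
    printf '%s\n' "${config[*]}" | jq .
}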
00:10:11.181 passed 00:10:11.181 Test: blockdev write read 8 blocks ...passed 00:10:11.181 Test: blockdev write read size > 128k ...passed 00:10:11.181 Test: blockdev write read invalid size ...passed 00:10:11.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:11.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:11.181 Test: blockdev write read max offset ...passed 00:10:11.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:11.181 Test: blockdev writev readv 8 blocks ...passed 00:10:11.181 Test: blockdev writev readv 30 x 1block ...passed 00:10:11.181 Test: blockdev writev readv block ...passed 00:10:11.181 Test: blockdev writev readv size > 128k ...passed 00:10:11.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:11.181 Test: blockdev comparev and writev ...[2024-07-26 18:10:37.318665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.318700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.318725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.318743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.319194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.319227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.319250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.319267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.319705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.319729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.319751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.319768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.320198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.320222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:11.181 [2024-07-26 18:10:37.320243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.181 [2024-07-26 18:10:37.320259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:11.442 passed 00:10:11.442 Test: blockdev nvme passthru rw ...passed 00:10:11.442 Test: blockdev nvme passthru vendor specific ...[2024-07-26 18:10:37.403437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.442 [2024-07-26 18:10:37.403464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:11.442 [2024-07-26 18:10:37.403692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.442 [2024-07-26 18:10:37.403715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:11.442 [2024-07-26 18:10:37.403941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.442 [2024-07-26 18:10:37.403965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:11.442 [2024-07-26 18:10:37.404199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.442 [2024-07-26 18:10:37.404222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:11.442 passed 00:10:11.442 Test: blockdev nvme admin passthru ...passed 00:10:11.442 Test: blockdev copy ...passed 00:10:11.442 00:10:11.442 Run Summary: Type Total Ran Passed Failed Inactive 00:10:11.442 suites 1 1 n/a 0 0 00:10:11.442 tests 23 23 23 0 0 00:10:11.442 asserts 152 152 152 0 n/a 00:10:11.442 00:10:11.442 Elapsed time = 1.343 seconds 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.700 rmmod nvme_tcp 00:10:11.700 rmmod nvme_fabrics 00:10:11.700 rmmod nvme_keyring 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
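In the comparev/writev cases above, SPDK prints each completion as an (SCT/SC) pair: (02/85) is status code type 2h (media and data integrity errors) with status 85h, Compare Failure, and (00/09) is generic status 09h, Command Aborted due to Failed Fused Command. That is, the write half of each fused compare-and-write aborts when its compare half fails, which is exactly what the suite expects, so all 23 tests pass. The teardown that follows mirrors the bring-up (sketch; the set +e/set -e bracket mirrors nvmf/common.sh@120-124 above, since the initiator modules may still be referenced on the first unload attempt):

./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
set +e
modprobe -v -r nvme-tcp        # per the rmmod lines above, this also drags out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
set -e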
00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1391247 ']' 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1391247 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1391247 ']' 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1391247 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391247 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391247' 00:10:11.700 killing process with pid 1391247 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1391247 00:10:11.700 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1391247 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.959 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.499 00:10:14.499 real 0m6.456s 00:10:14.499 user 0m11.076s 00:10:14.499 sys 0m2.118s 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.499 ************************************ 00:10:14.499 END TEST nvmf_bdevio 00:10:14.499 ************************************ 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:14.499 00:10:14.499 real 3m50.565s 00:10:14.499 user 9m56.677s 00:10:14.499 sys 1m8.124s 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.499 ************************************ 00:10:14.499 END TEST nvmf_target_core 00:10:14.499 ************************************ 00:10:14.499 18:10:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:14.499 18:10:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.499 18:10:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.499 18:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.499 ************************************ 00:10:14.499 START TEST nvmf_target_extra 00:10:14.499 ************************************ 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:14.499 * Looking for test storage... 00:10:14.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.499 18:10:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
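Each run_test re-sources nvmf/common.sh, as the trace above shows; besides the target-side defaults (NVMF_PORT=4420, second/third ports 4421/4422) it primes initiator-side pieces: NVME_CONNECT='nvme connect' plus an NVME_HOST array built from a freshly generated hostnqn/hostid. Tests that attach through the kernel initiator combine them roughly like this (sketch — this particular excerpt never issues the connect itself):

nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode1 \
    "${NVME_HOST[@]}"    # expands to --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID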
00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:14.500 ************************************ 00:10:14.500 START TEST nvmf_example 00:10:14.500 ************************************ 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:14.500 * Looking for test storage... 00:10:14.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.500 18:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.500 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.501 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.501 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.501 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.501 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.501 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.501 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:16.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:16.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:16.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.408 18:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:16.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.408 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:10:16.409 00:10:16.409 --- 10.0.0.2 ping statistics --- 00:10:16.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.409 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:10:16.409 00:10:16.409 --- 10.0.0.1 ping statistics --- 00:10:16.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.409 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1393510 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1393510 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1393510 ']' 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.409 18:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.409 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.409 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.343 18:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.343 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:17.343 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.573 Initializing NVMe Controllers 00:10:29.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.573 Initialization complete. Launching workers. 00:10:29.573 ======================================================== 00:10:29.573 Latency(us) 00:10:29.573 Device Information : IOPS MiB/s Average min max 00:10:29.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14956.60 58.42 4281.38 887.34 15410.94 00:10:29.573 ======================================================== 00:10:29.573 Total : 14956.60 58.42 4281.38 887.34 15410.94 00:10:29.573 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.573 rmmod nvme_tcp 00:10:29.573 rmmod nvme_fabrics 00:10:29.573 rmmod nvme_keyring 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1393510 ']' 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1393510 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1393510 ']' 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1393510 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.573 18:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1393510 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1393510' 00:10:29.573 killing process with pid 1393510 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1393510 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1393510 00:10:29.573 nvmf threads initialize successfully 00:10:29.573 bdev subsystem init successfully 00:10:29.573 created a nvmf target service 00:10:29.573 create targets's poll groups done 00:10:29.573 all subsystems of target started 00:10:29.573 nvmf target is running 00:10:29.573 all subsystems of target stopped 00:10:29.573 destroy targets's poll groups done 00:10:29.573 destroyed the nvmf target service 00:10:29.573 bdev subsystem finish successfully 00:10:29.573 nvmf threads destroy successfully 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.573 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.832 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.832 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:29.832 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.832 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.095 00:10:30.095 real 0m15.799s 00:10:30.095 user 0m45.043s 00:10:30.095 sys 0m3.233s 00:10:30.095 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.095 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.095 ************************************ 00:10:30.095 END TEST nvmf_example 00:10:30.095 ************************************ 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.095 18:10:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.095 ************************************ 00:10:30.095 START TEST nvmf_filesystem 00:10:30.095 ************************************ 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.095 * Looking for test storage... 00:10:30.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:30.095 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:30.096 18:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:30.096 
18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:30.096 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:30.096 #define SPDK_CONFIG_H 00:10:30.096 #define SPDK_CONFIG_APPS 1 00:10:30.096 #define SPDK_CONFIG_ARCH native 00:10:30.096 #undef SPDK_CONFIG_ASAN 00:10:30.096 #undef SPDK_CONFIG_AVAHI 00:10:30.096 #undef SPDK_CONFIG_CET 00:10:30.096 #define SPDK_CONFIG_COVERAGE 1 00:10:30.096 #define SPDK_CONFIG_CROSS_PREFIX 00:10:30.096 #undef SPDK_CONFIG_CRYPTO 00:10:30.097 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:30.097 #undef SPDK_CONFIG_CUSTOMOCF 00:10:30.097 #undef SPDK_CONFIG_DAOS 00:10:30.097 #define SPDK_CONFIG_DAOS_DIR 00:10:30.097 #define SPDK_CONFIG_DEBUG 1 00:10:30.097 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:30.097 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:30.097 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:30.097 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:30.097 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:30.097 #undef SPDK_CONFIG_DPDK_UADK 00:10:30.097 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.097 #define SPDK_CONFIG_EXAMPLES 1 00:10:30.097 #undef SPDK_CONFIG_FC 00:10:30.097 #define SPDK_CONFIG_FC_PATH 00:10:30.097 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:30.097 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:30.097 #undef SPDK_CONFIG_FUSE 00:10:30.097 #undef SPDK_CONFIG_FUZZER 00:10:30.097 #define SPDK_CONFIG_FUZZER_LIB 00:10:30.097 #undef SPDK_CONFIG_GOLANG 00:10:30.097 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:30.097 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:30.097 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:30.097 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:30.097 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:30.097 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:30.097 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:30.097 #define SPDK_CONFIG_IDXD 1 00:10:30.097 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:30.097 #undef SPDK_CONFIG_IPSEC_MB 00:10:30.097 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:30.097 #define SPDK_CONFIG_ISAL 1 00:10:30.097 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:30.097 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:30.097 #define SPDK_CONFIG_LIBDIR 00:10:30.097 #undef SPDK_CONFIG_LTO 00:10:30.097 #define SPDK_CONFIG_MAX_LCORES 128 00:10:30.097 #define SPDK_CONFIG_NVME_CUSE 1 00:10:30.097 #undef SPDK_CONFIG_OCF 00:10:30.097 #define SPDK_CONFIG_OCF_PATH 00:10:30.097 #define SPDK_CONFIG_OPENSSL_PATH 00:10:30.097 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:30.097 #define SPDK_CONFIG_PGO_DIR 00:10:30.097 #undef SPDK_CONFIG_PGO_USE 00:10:30.097 #define SPDK_CONFIG_PREFIX /usr/local 00:10:30.097 #undef SPDK_CONFIG_RAID5F 00:10:30.097 #undef SPDK_CONFIG_RBD 00:10:30.097 #define SPDK_CONFIG_RDMA 1 00:10:30.097 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:30.097 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:30.097 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:30.097 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:30.097 #define SPDK_CONFIG_SHARED 1 00:10:30.097 #undef SPDK_CONFIG_SMA 00:10:30.097 #define SPDK_CONFIG_TESTS 1 00:10:30.097 #undef SPDK_CONFIG_TSAN 00:10:30.097 #define SPDK_CONFIG_UBLK 1 00:10:30.097 #define SPDK_CONFIG_UBSAN 1 00:10:30.097 #undef SPDK_CONFIG_UNIT_TESTS 00:10:30.097 #undef SPDK_CONFIG_URING 00:10:30.097 #define 
SPDK_CONFIG_URING_PATH 00:10:30.097 #undef SPDK_CONFIG_URING_ZNS 00:10:30.097 #undef SPDK_CONFIG_USDT 00:10:30.097 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:30.097 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:30.097 #define SPDK_CONFIG_VFIO_USER 1 00:10:30.097 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:30.097 #define SPDK_CONFIG_VHOST 1 00:10:30.097 #define SPDK_CONFIG_VIRTIO 1 00:10:30.097 #undef SPDK_CONFIG_VTUNE 00:10:30.097 #define SPDK_CONFIG_VTUNE_DIR 00:10:30.097 #define SPDK_CONFIG_WERROR 1 00:10:30.097 #define SPDK_CONFIG_WPDK_DIR 00:10:30.097 #undef SPDK_CONFIG_XNVME 00:10:30.097 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.097 18:10:56 
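The heavily escaped pattern at applications.sh@23 is an ordinary bash substring glob applied to the whole of include/spdk/config.h: it asks whether the tree was configured with SPDK_CONFIG_DEBUG before the @24 check of SPDK_AUTOTEST_DEBUG_APPS. A sketch of the same test, assuming the workspace path from this run:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    # $(< file) expands to the file contents; the *...* glob needs no anchors.
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: debug-only test apps may be enabled"
    fi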
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:30.097 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:30.098 18:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:30.098 18:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 
00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:30.098 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:30.099 
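Each ': N' followed by 'export SPDK_TEST_*' in the block above is the default-then-export idiom: ':' is a no-op whose argument evaluates ${VAR:=default}, assigning the default only when the CI job did not already inject a value (hence ': 1' for flags this run enables and ': 0' for the rest). A one-flag sketch (the flag name here is illustrative):

    # Assign 0 only if the caller left the flag unset, then export it.
    : "${SPDK_TEST_EXAMPLE:=0}"
    export SPDK_TEST_EXAMPLE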
18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.099 18:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # 
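The suppression-file sequence above is rebuilt on every run: delete the old /var/tmp/asan_suppression_file, append the known-benign leak pattern for libfuse3, and point LSAN_OPTIONS at the file so LeakSanitizer skips matching allocations. Reproduced as a sketch with the same path and rule:

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    # One 'leak:<pattern>' rule per line; matching stacks are ignored by LSan.
    echo "leak:libfuse3.so" >> "$supp"
    export LSAN_OPTIONS=suppressions=$supp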
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:30.099 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1395218 ]] 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1395218 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.VeJWw4 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VeJWw4/tests/target /tmp/spdk.VeJWw4 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
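storage_candidates above orders the places set_test_storage may use: the real test directory first, then a per-test subdirectory of a fresh /tmp workspace, then that workspace itself. mktemp -udt only generates the name (-u dry run, -d directory template, -t relative to $TMPDIR), and mkdir -p creates the first and fallback directories up front. A sketch with the template from the log:

    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # name only, nothing created yet
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}"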
common/autotest_common.sh@364 -- # avails["$mount"]=54026670080 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=7968043008 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996320256 00:10:30.100 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1036288 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
read -r source fs size use avail _ mount 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:30.101 * Looking for test storage... 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=54026670080 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=10182635520 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@1689 -- # xtrace_fd 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.101 18:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:30.101 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.102 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:32.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:32.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.013 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:32.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:32.014 Found net devices under 0000:0a:00.1: cvl_0_1 
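[editor's note] The trace above is nvmf/common.sh resolving supported NICs: it matches PCI devices against known Intel (0x8086) and Mellanox (0x15b3) device IDs (here two Intel E810 ports, device 0x159b, bound to the ice driver) and maps each PCI address to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1. A minimal standalone sketch of that resolution, using lspci in place of the script's internal pci_bus_cache (illustrative only, not the verbatim helper):

# Sketch: find net devices backed by Intel E810 NICs (vendor 0x8086, device 0x159b),
# mirroring the pci_devs -> pci_net_devs mapping seen in the trace above.
intel=8086
e810=159b
net_devs=()
for pci in $(lspci -Dnmm -d "${intel}:${e810}" | awk '{print $1}'); do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue
        net_devs+=("${netdir##*/}")      # e.g. cvl_0_0, cvl_0_1
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"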
00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.014 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:10:32.275 00:10:32.275 --- 10.0.0.2 ping statistics --- 00:10:32.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.275 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:10:32.275 00:10:32.275 --- 10.0.0.1 ping statistics --- 00:10:32.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.275 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.275 ************************************ 00:10:32.275 START TEST nvmf_filesystem_no_in_capsule 00:10:32.275 ************************************ 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1396838 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1396838 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1396838 ']' 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.275 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.276 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.276 [2024-07-26 18:10:58.289603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:32.276 [2024-07-26 18:10:58.289706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.276 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.276 [2024-07-26 18:10:58.329526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:32.276 [2024-07-26 18:10:58.356025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.535 [2024-07-26 18:10:58.447489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.535 [2024-07-26 18:10:58.447548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.535 [2024-07-26 18:10:58.447577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.535 [2024-07-26 18:10:58.447588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.535 [2024-07-26 18:10:58.447598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
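[editor's note] Behind these startup notices sit the nvmf_tcp_init and nvmfappstart steps traced earlier: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, connectivity is ping-checked both ways, and nvmf_tgt is launched inside the namespace. Condensed into a runnable sketch (interface names, addresses, and flags copied from the log; the rpc.py polling loop is an assumed stand-in for the harness's waitforlisten helper):

# Two-port TCP test topology: target inside a netns, initiator in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# Launch the target app in the namespace, then wait for its RPC socket
# before issuing configuration RPCs (assumed waitforlisten equivalent).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done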
00:10:32.535 [2024-07-26 18:10:58.447647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.535 [2024-07-26 18:10:58.447708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.535 [2024-07-26 18:10:58.447772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.535 [2024-07-26 18:10:58.447774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 [2024-07-26 18:10:58.600297] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.535 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.794 Malloc1 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.794 18:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.794 [2024-07-26 18:10:58.769981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:32.794 { 00:10:32.794 "name": "Malloc1", 00:10:32.794 "aliases": [ 00:10:32.794 "011c0ee5-4145-47f4-a010-80c9cb8311b7" 00:10:32.794 ], 00:10:32.794 "product_name": "Malloc disk", 00:10:32.794 "block_size": 512, 00:10:32.794 "num_blocks": 1048576, 00:10:32.794 "uuid": "011c0ee5-4145-47f4-a010-80c9cb8311b7", 00:10:32.794 "assigned_rate_limits": { 00:10:32.794 "rw_ios_per_sec": 0, 00:10:32.794 "rw_mbytes_per_sec": 0, 00:10:32.794 "r_mbytes_per_sec": 0, 00:10:32.794 "w_mbytes_per_sec": 0 00:10:32.794 }, 00:10:32.794 "claimed": true, 00:10:32.794 "claim_type": "exclusive_write", 00:10:32.794 "zoned": false, 00:10:32.794 "supported_io_types": { 00:10:32.794 "read": 
true, 00:10:32.794 "write": true, 00:10:32.794 "unmap": true, 00:10:32.794 "flush": true, 00:10:32.794 "reset": true, 00:10:32.794 "nvme_admin": false, 00:10:32.794 "nvme_io": false, 00:10:32.794 "nvme_io_md": false, 00:10:32.794 "write_zeroes": true, 00:10:32.794 "zcopy": true, 00:10:32.794 "get_zone_info": false, 00:10:32.794 "zone_management": false, 00:10:32.794 "zone_append": false, 00:10:32.794 "compare": false, 00:10:32.794 "compare_and_write": false, 00:10:32.794 "abort": true, 00:10:32.794 "seek_hole": false, 00:10:32.794 "seek_data": false, 00:10:32.794 "copy": true, 00:10:32.794 "nvme_iov_md": false 00:10:32.794 }, 00:10:32.794 "memory_domains": [ 00:10:32.794 { 00:10:32.794 "dma_device_id": "system", 00:10:32.794 "dma_device_type": 1 00:10:32.794 }, 00:10:32.794 { 00:10:32.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.794 "dma_device_type": 2 00:10:32.794 } 00:10:32.794 ], 00:10:32.794 "driver_specific": {} 00:10:32.794 } 00:10:32.794 ]' 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:32.794 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:32.795 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:32.795 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:32.795 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:32.795 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:32.795 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:32.795 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:33.734 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.734 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:33.734 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.734 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:33.735 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:35.642 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:36.579 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.518 ************************************ 00:10:37.518 START TEST filesystem_ext4 00:10:37.518 ************************************ 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
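[editor's note] The filesystem.sh@52-@69 trace above amounts to a short target/host wiring sequence: create the TCP transport with in-capsule data disabled (-c 0), back the subsystem with a 512 MiB malloc bdev, export it on 10.0.0.2:4420, connect from the host side, wait for the serial to appear, and carve one GPT partition for the ext4/btrfs/xfs cases that follow. Expressed as plain rpc.py/nvme-cli calls (rpc.py standing in for the harness's rpc_cmd wrapper; NVME_HOSTNQN and NVME_HOSTID come from the nvme gen-hostnqn step earlier in the log):

# Target side: transport, backing bdev, subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect, wait for the namespace to surface, then partition it.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

Each filesystem case traced below then runs the same round-trip over /dev/nvme0n1p1: mkfs, mount at /mnt/device, touch/sync/rm a file, unmount, and verify with kill -0 that the nvmf_tgt process survived the I/O.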
00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:37.518 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:37.518 mke2fs 1.46.5 (30-Dec-2021) 00:10:37.518 Discarding device blocks: 0/522240 done 00:10:37.518 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:37.518 Filesystem UUID: af0e248f-b502-4826-ac37-a807db4bcc1d 00:10:37.518 Superblock backups stored on blocks: 00:10:37.518 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:37.518 00:10:37.518 Allocating group tables: 0/64 done 00:10:37.518 Writing inode tables: 0/64 done 00:10:37.518 Creating journal (8192 blocks): done 00:10:37.777 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.777 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.777 
18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1396838 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.777 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:38.037 00:10:38.037 real 0m0.502s 00:10:38.037 user 0m0.018s 00:10:38.037 sys 0m0.050s 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:38.037 ************************************ 00:10:38.037 END TEST filesystem_ext4 00:10:38.037 ************************************ 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.037 ************************************ 00:10:38.037 START TEST filesystem_btrfs 00:10:38.037 ************************************ 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:38.037 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:38.037 18:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:38.038 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:38.038 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:38.297 btrfs-progs v6.6.2 00:10:38.297 See https://btrfs.readthedocs.io for more information. 00:10:38.297 00:10:38.297 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:38.297 NOTE: several default settings have changed in version 5.15, please make sure 00:10:38.297 this does not affect your deployments: 00:10:38.297 - DUP for metadata (-m dup) 00:10:38.297 - enabled no-holes (-O no-holes) 00:10:38.297 - enabled free-space-tree (-R free-space-tree) 00:10:38.297 00:10:38.297 Label: (null) 00:10:38.297 UUID: 20b35034-e9ec-4092-8169-f3a4c6cada98 00:10:38.297 Node size: 16384 00:10:38.297 Sector size: 4096 00:10:38.297 Filesystem size: 510.00MiB 00:10:38.297 Block group profiles: 00:10:38.297 Data: single 8.00MiB 00:10:38.297 Metadata: DUP 32.00MiB 00:10:38.297 System: DUP 8.00MiB 00:10:38.297 SSD detected: yes 00:10:38.297 Zoned device: no 00:10:38.297 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:38.297 Runtime features: free-space-tree 00:10:38.297 Checksum: crc32c 00:10:38.297 Number of devices: 1 00:10:38.297 Devices: 00:10:38.297 ID SIZE PATH 00:10:38.297 1 510.00MiB /dev/nvme0n1p1 00:10:38.298 00:10:38.298 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:38.298 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1396838 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.239 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.240 00:10:39.240 real 0m1.125s 00:10:39.240 user 0m0.014s 00:10:39.240 sys 0m0.125s 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.240 ************************************ 00:10:39.240 END TEST filesystem_btrfs 00:10:39.240 ************************************ 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.240 ************************************ 00:10:39.240 START TEST filesystem_xfs 00:10:39.240 ************************************ 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:39.240 18:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:39.240 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:39.240 = sectsz=512 attr=2, projid32bit=1 00:10:39.240 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:39.240 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:39.240 data = bsize=4096 blocks=130560, imaxpct=25 00:10:39.240 = sunit=0 swidth=0 blks 00:10:39.240 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:39.240 log =internal log bsize=4096 blocks=16384, version=2 00:10:39.240 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:39.240 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:40.181 Discarding blocks...Done. 00:10:40.181 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:40.181 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1396838 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.718 00:10:42.718 real 0m3.642s 00:10:42.718 user 0m0.024s 00:10:42.718 sys 0m0.053s 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:42.718 ************************************ 00:10:42.718 END TEST filesystem_xfs 00:10:42.718 ************************************ 00:10:42.718 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:42.977 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:42.977 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1396838 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1396838 ']' 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1396838 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1396838 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1396838' 00:10:43.237 killing process with pid 1396838 00:10:43.237 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1396838 00:10:43.237 18:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1396838 00:10:43.523 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:43.523 00:10:43.523 real 0m11.380s 00:10:43.523 user 0m43.588s 00:10:43.523 sys 0m1.739s 00:10:43.524 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.524 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.524 ************************************ 00:10:43.524 END TEST nvmf_filesystem_no_in_capsule 00:10:43.524 ************************************ 00:10:43.524 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:43.524 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.524 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.524 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.785 ************************************ 00:10:43.785 START TEST nvmf_filesystem_in_capsule 00:10:43.785 ************************************ 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1398394 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1398394 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1398394 ']' 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:43.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.785 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.785 [2024-07-26 18:11:09.726926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:43.785 [2024-07-26 18:11:09.727022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.785 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.785 [2024-07-26 18:11:09.770118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:43.785 [2024-07-26 18:11:09.802739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.785 [2024-07-26 18:11:09.899786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.785 [2024-07-26 18:11:09.899850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.785 [2024-07-26 18:11:09.899866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.785 [2024-07-26 18:11:09.899880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.785 [2024-07-26 18:11:09.899891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
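Annotation: the nvmf_tgt instance above was launched with core mask -m 0xF (CPUs 0-3), which is why app.c reports "Total cores available: 4" and why four reactor threads start in the notices that follow. The bring-up this test performs over the next few entries (create the TCP transport with 4096-byte in-capsule data, back it with a malloc bdev, publish subsystem cnode1, add a TCP listener, then connect from the initiator) corresponds roughly to the stand-alone shell sketch below. This is a sketch only, not the harness's exact invocation: it assumes SPDK's stock scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, and it omits the hugepage setup and the cvl_0_0_ns_spdk network-namespace wiring (plus the -i 0 -e 0xFFFF flags) that the harness applies elsewhere.
    build/bin/nvmf_tgt -m 0xF &                                        # reactors on cores 0-3
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c 4096 = in-capsule data size
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # trace also passes --hostnqn/--hostid
The commands and flags mirror the rpc_cmd calls traced below (rpc_cmd is the harness wrapper around scripts/rpc.py); the 512 MiB / 512 B malloc arguments match the num_blocks=1048576, block_size=512 bdev dump later in this log.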
00:10:43.785 [2024-07-26 18:11:09.899955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.785 [2024-07-26 18:11:09.900012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.785 [2024-07-26 18:11:09.900128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.785 [2024-07-26 18:11:09.900131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.045 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.045 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:44.045 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.045 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.045 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.045 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.046 [2024-07-26 18:11:10.052432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.046 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.304 Malloc1 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.304 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.305 [2024-07-26 18:11:10.236271] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:44.305 { 00:10:44.305 "name": "Malloc1", 00:10:44.305 "aliases": [ 00:10:44.305 "a4870ade-38bf-4d2e-a442-3ea3e7107f1e" 00:10:44.305 ], 00:10:44.305 "product_name": "Malloc disk", 00:10:44.305 "block_size": 512, 00:10:44.305 "num_blocks": 1048576, 00:10:44.305 "uuid": "a4870ade-38bf-4d2e-a442-3ea3e7107f1e", 00:10:44.305 "assigned_rate_limits": { 00:10:44.305 "rw_ios_per_sec": 0, 00:10:44.305 "rw_mbytes_per_sec": 0, 00:10:44.305 "r_mbytes_per_sec": 0, 00:10:44.305 "w_mbytes_per_sec": 0 00:10:44.305 }, 00:10:44.305 "claimed": true, 00:10:44.305 "claim_type": "exclusive_write", 00:10:44.305 "zoned": false, 00:10:44.305 "supported_io_types": { 00:10:44.305 "read": true, 00:10:44.305 "write": true, 00:10:44.305 "unmap": true, 00:10:44.305 "flush": true, 00:10:44.305 "reset": true, 00:10:44.305 "nvme_admin": false, 
00:10:44.305 "nvme_io": false, 00:10:44.305 "nvme_io_md": false, 00:10:44.305 "write_zeroes": true, 00:10:44.305 "zcopy": true, 00:10:44.305 "get_zone_info": false, 00:10:44.305 "zone_management": false, 00:10:44.305 "zone_append": false, 00:10:44.305 "compare": false, 00:10:44.305 "compare_and_write": false, 00:10:44.305 "abort": true, 00:10:44.305 "seek_hole": false, 00:10:44.305 "seek_data": false, 00:10:44.305 "copy": true, 00:10:44.305 "nvme_iov_md": false 00:10:44.305 }, 00:10:44.305 "memory_domains": [ 00:10:44.305 { 00:10:44.305 "dma_device_id": "system", 00:10:44.305 "dma_device_type": 1 00:10:44.305 }, 00:10:44.305 { 00:10:44.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.305 "dma_device_type": 2 00:10:44.305 } 00:10:44.305 ], 00:10:44.305 "driver_specific": {} 00:10:44.305 } 00:10:44.305 ]' 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:44.305 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.874 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.874 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.874 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.874 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:44.874 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:47.407 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:47.407 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:47.407 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.407 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:47.408 18:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:47.408 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:47.666 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.049 ************************************ 00:10:49.049 START TEST filesystem_in_capsule_ext4 00:10:49.049 ************************************ 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:49.049 18:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:49.049 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:49.049 mke2fs 1.46.5 (30-Dec-2021) 00:10:49.049 Discarding device blocks: 0/522240 done 00:10:49.049 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:49.049 Filesystem UUID: f81c7345-7ecc-45ff-8a36-5a7856d4a800 00:10:49.049 Superblock backups stored on blocks: 00:10:49.049 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:49.049 00:10:49.049 Allocating group tables: 0/64 done 00:10:49.049 Writing inode tables: 0/64 done 00:10:49.049 Creating journal (8192 blocks): done 00:10:50.248 Writing superblocks and filesystem accounting information: 0/64 done 00:10:50.248 00:10:50.248 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:50.248 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.818 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.076 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:51.076 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.076 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:51.076 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:51.076 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.076 18:11:17
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1398394 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.076 00:10:51.076 real 0m2.189s 00:10:51.076 user 0m0.022s 00:10:51.076 sys 0m0.047s 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:51.076 ************************************ 00:10:51.076 END TEST filesystem_in_capsule_ext4 00:10:51.076 ************************************ 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.076 ************************************ 00:10:51.076 START TEST filesystem_in_capsule_btrfs 00:10:51.076 ************************************ 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@929 -- # local force 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:51.076 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:51.336 btrfs-progs v6.6.2 00:10:51.336 See https://btrfs.readthedocs.io for more information. 00:10:51.336 00:10:51.336 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:51.336 NOTE: several default settings have changed in version 5.15, please make sure 00:10:51.336 this does not affect your deployments: 00:10:51.336 - DUP for metadata (-m dup) 00:10:51.336 - enabled no-holes (-O no-holes) 00:10:51.336 - enabled free-space-tree (-R free-space-tree) 00:10:51.336 00:10:51.336 Label: (null) 00:10:51.336 UUID: f534d45a-4c84-4265-afd0-92597abad335 00:10:51.336 Node size: 16384 00:10:51.336 Sector size: 4096 00:10:51.336 Filesystem size: 510.00MiB 00:10:51.336 Block group profiles: 00:10:51.336 Data: single 8.00MiB 00:10:51.336 Metadata: DUP 32.00MiB 00:10:51.336 System: DUP 8.00MiB 00:10:51.336 SSD detected: yes 00:10:51.336 Zoned device: no 00:10:51.336 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:51.336 Runtime features: free-space-tree 00:10:51.336 Checksum: crc32c 00:10:51.336 Number of devices: 1 00:10:51.336 Devices: 00:10:51.336 ID SIZE PATH 00:10:51.336 1 510.00MiB /dev/nvme0n1p1 00:10:51.336 00:10:51.336 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:51.336 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.274 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.274 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.274 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.274 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.274 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.274 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1398394 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.534 00:10:52.534 real 0m1.385s 00:10:52.534 user 0m0.013s 00:10:52.534 sys 0m0.112s 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.534 ************************************ 00:10:52.534 END TEST filesystem_in_capsule_btrfs 00:10:52.534 ************************************ 00:10:52.534 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 ************************************ 00:10:52.535 START TEST filesystem_in_capsule_xfs 00:10:52.535 ************************************ 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:52.535 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:52.535 18:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:52.535 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:52.535 = sectsz=512 attr=2, projid32bit=1 00:10:52.535 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:52.535 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:52.535 data = bsize=4096 blocks=130560, imaxpct=25 00:10:52.535 = sunit=0 swidth=0 blks 00:10:52.535 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:52.535 log =internal log bsize=4096 blocks=16384, version=2 00:10:52.535 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:52.535 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.472 Discarding blocks...Done. 00:10:53.472 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:53.472 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1398394 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.012 00:10:56.012 real 0m3.499s 00:10:56.012 user 0m0.024s 00:10:56.012 sys 0m0.054s 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.012 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.012 ************************************ 00:10:56.012 END TEST filesystem_in_capsule_xfs 00:10:56.012 ************************************ 00:10:56.012 18:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1398394 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1398394 ']' 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1398394 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398394 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398394' 00:10:56.272 killing process with pid 1398394 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1398394 00:10:56.272 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1398394 00:10:56.840 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:56.840 00:10:56.840 real 0m13.147s 00:10:56.840 user 0m50.511s 00:10:56.840 sys 0m1.855s 00:10:56.840 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.840 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.841 ************************************ 00:10:56.841 END TEST nvmf_filesystem_in_capsule 00:10:56.841 ************************************ 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.841 rmmod nvme_tcp 00:10:56.841 rmmod nvme_fabrics 00:10:56.841 rmmod nvme_keyring 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.841 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.381 00:10:59.381 real 
0m28.901s 00:10:59.381 user 1m34.938s 00:10:59.381 sys 0m5.141s 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.381 ************************************ 00:10:59.381 END TEST nvmf_filesystem 00:10:59.381 ************************************ 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.381 18:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.381 ************************************ 00:10:59.382 START TEST nvmf_target_discovery 00:10:59.382 ************************************ 00:10:59.382 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:59.382 * Looking for test storage... 00:10:59.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.382 18:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.382 18:11:25 
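[Note] discovery.sh pins its fixture parameters before bringing the stack up, and the values traced above drive everything that follows. For reference, the constants as set in the trace:

  NULL_BDEV_SIZE=102400     # size argument later passed to bdev_null_create
  NULL_BLOCK_SIZE=512       # block size of each null bdev
  NVMF_PORT_REFERRAL=4430   # port advertised later via nvmf_discovery_add_referral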
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.382 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:01.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:01.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
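[Note] The enumeration above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver) and resolves each PCI function to its kernel interface through sysfs. A minimal sketch of that lookup, with the address taken from the trace:

  pci=0000:0a:00.0                        # first E810 port found above
  ls "/sys/bus/pci/devices/$pci/net/"     # prints cvl_0_0 on this rig, per the trace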
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:01.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:01.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.288 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:11:01.289 00:11:01.289 --- 10.0.0.2 ping statistics --- 00:11:01.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.289 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:01.289 00:11:01.289 --- 10.0.0.1 ping statistics --- 00:11:01.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.289 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1402055 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1402055 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1402055 ']' 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.289 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.289 [2024-07-26 18:11:27.304465] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
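[Note] nvmf_tcp_init turned the two E810 ports into a point-to-point link: the target port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, the initiator port (cvl_0_1) stayed in the root namespace at 10.0.0.1, and the two pings above confirmed both directions. A minimal sketch of the same setup, using only commands that appear in the trace:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator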
00:11:01.289 [2024-07-26 18:11:27.304550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.289 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.289 [2024-07-26 18:11:27.341463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:01.289 [2024-07-26 18:11:27.375523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.548 [2024-07-26 18:11:27.471141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.548 [2024-07-26 18:11:27.471206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.548 [2024-07-26 18:11:27.471224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.548 [2024-07-26 18:11:27.471237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.548 [2024-07-26 18:11:27.471250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.548 [2024-07-26 18:11:27.471333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.548 [2024-07-26 18:11:27.471387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.548 [2024-07-26 18:11:27.471419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.548 [2024-07-26 18:11:27.471421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 [2024-07-26 18:11:27.631746] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:01.548 18:11:27 
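[Note] With networking up, the harness launches nvmf_tgt inside the target namespace and enables the TCP transport over JSON-RPC before provisioning any subsystems. A sketch of those two steps, assuming the stock scripts/rpc.py client stands in for the rpc_cmd wrapper seen in the trace:

  # start the target in the namespace: shm id 0, tracepoint mask 0xFFFF, 4-core mask
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once the RPC socket is listening, create the TCP transport (flags copied from the trace)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192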
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 Null1 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 [2024-07-26 18:11:27.672074] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 Null2 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.548 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 Null3 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 
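[Note] The Null bdevs and cnode subsystems traced here all come from one loop: per index the test creates a null bdev, wraps it in a subsystem, attaches the bdev as a namespace, and adds a TCP listener. The equivalent loop, reassembled from the rpc_cmd calls in the trace:

  for i in 1 2 3 4; do
    rpc_cmd bdev_null_create "Null$i" 102400 512                    # backing bdev
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                                 # -a: allow any host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
  done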
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 Null4 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.810 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:02.105 00:11:02.105 Discovery Log Number of Records 6, Generation counter 6 00:11:02.105 =====Discovery Log Entry 0====== 00:11:02.105 trtype: tcp 00:11:02.105 adrfam: ipv4 00:11:02.105 subtype: current discovery subsystem 00:11:02.105 treq: not required 00:11:02.105 portid: 0 00:11:02.105 trsvcid: 4420 00:11:02.105 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:11:02.105 traddr: 10.0.0.2 00:11:02.105 eflags: explicit discovery connections, duplicate discovery information 00:11:02.105 sectype: none 00:11:02.105 =====Discovery Log Entry 1====== 00:11:02.105 trtype: tcp 00:11:02.105 adrfam: ipv4 00:11:02.105 subtype: nvme subsystem 00:11:02.105 treq: not required 00:11:02.105 portid: 0 00:11:02.105 trsvcid: 4420 00:11:02.105 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:02.105 traddr: 10.0.0.2 00:11:02.105 eflags: none 00:11:02.105 sectype: none 00:11:02.105 =====Discovery Log Entry 2====== 00:11:02.105 trtype: tcp 00:11:02.105 adrfam: ipv4 00:11:02.105 subtype: nvme subsystem 00:11:02.105 treq: not required 00:11:02.105 portid: 0 00:11:02.105 trsvcid: 4420 00:11:02.105 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:02.105 traddr: 10.0.0.2 00:11:02.105 eflags: none 00:11:02.105 sectype: none 00:11:02.105 =====Discovery Log Entry 3====== 00:11:02.105 trtype: tcp 00:11:02.105 adrfam: ipv4 00:11:02.105 subtype: nvme subsystem 00:11:02.105 treq: not required 00:11:02.105 portid: 0 00:11:02.105 trsvcid: 4420 00:11:02.105 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:02.105 traddr: 10.0.0.2 00:11:02.105 eflags: none 00:11:02.105 sectype: none 00:11:02.105 =====Discovery Log Entry 4====== 00:11:02.105 trtype: tcp 00:11:02.105 adrfam: ipv4 00:11:02.105 subtype: nvme subsystem 00:11:02.105 treq: not required 00:11:02.105 portid: 0 00:11:02.105 trsvcid: 4420 00:11:02.105 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:02.105 traddr: 10.0.0.2 00:11:02.105 eflags: none 00:11:02.105 sectype: none 00:11:02.105 =====Discovery Log Entry 5====== 00:11:02.105 trtype: tcp 00:11:02.105 adrfam: ipv4 00:11:02.105 subtype: discovery subsystem referral 00:11:02.105 treq: not required 00:11:02.105 portid: 0 00:11:02.105 trsvcid: 4430 00:11:02.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.105 traddr: 10.0.0.2 00:11:02.105 eflags: none 00:11:02.105 sectype: none 00:11:02.105 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:02.105 Perform nvmf subsystem discovery via RPC 00:11:02.105 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:02.105 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.105 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.105 [ 00:11:02.105 { 00:11:02.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:02.105 "subtype": "Discovery", 00:11:02.105 "listen_addresses": [ 00:11:02.105 { 00:11:02.105 "trtype": "TCP", 00:11:02.105 "adrfam": "IPv4", 00:11:02.105 "traddr": "10.0.0.2", 00:11:02.105 "trsvcid": "4420" 00:11:02.105 } 00:11:02.105 ], 00:11:02.105 "allow_any_host": true, 00:11:02.105 "hosts": [] 00:11:02.105 }, 00:11:02.105 { 00:11:02.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.105 "subtype": "NVMe", 00:11:02.105 "listen_addresses": [ 00:11:02.105 { 00:11:02.105 "trtype": "TCP", 00:11:02.105 "adrfam": "IPv4", 00:11:02.105 "traddr": "10.0.0.2", 00:11:02.105 "trsvcid": "4420" 00:11:02.105 } 00:11:02.105 ], 00:11:02.105 "allow_any_host": true, 00:11:02.105 "hosts": [], 00:11:02.105 "serial_number": "SPDK00000000000001", 00:11:02.105 "model_number": "SPDK bdev Controller", 00:11:02.105 "max_namespaces": 32, 00:11:02.105 "min_cntlid": 1, 00:11:02.105 "max_cntlid": 65519, 00:11:02.105 "namespaces": [ 00:11:02.105 { 00:11:02.105 
"nsid": 1, 00:11:02.105 "bdev_name": "Null1", 00:11:02.105 "name": "Null1", 00:11:02.105 "nguid": "DE1E5EB18FA0414583598B75AE24E27A", 00:11:02.105 "uuid": "de1e5eb1-8fa0-4145-8359-8b75ae24e27a" 00:11:02.105 } 00:11:02.105 ] 00:11:02.105 }, 00:11:02.105 { 00:11:02.105 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:02.105 "subtype": "NVMe", 00:11:02.105 "listen_addresses": [ 00:11:02.105 { 00:11:02.105 "trtype": "TCP", 00:11:02.105 "adrfam": "IPv4", 00:11:02.105 "traddr": "10.0.0.2", 00:11:02.105 "trsvcid": "4420" 00:11:02.105 } 00:11:02.105 ], 00:11:02.105 "allow_any_host": true, 00:11:02.105 "hosts": [], 00:11:02.105 "serial_number": "SPDK00000000000002", 00:11:02.105 "model_number": "SPDK bdev Controller", 00:11:02.105 "max_namespaces": 32, 00:11:02.105 "min_cntlid": 1, 00:11:02.105 "max_cntlid": 65519, 00:11:02.105 "namespaces": [ 00:11:02.105 { 00:11:02.105 "nsid": 1, 00:11:02.106 "bdev_name": "Null2", 00:11:02.106 "name": "Null2", 00:11:02.106 "nguid": "AF51D96E52A841AB9A6742339DF51738", 00:11:02.106 "uuid": "af51d96e-52a8-41ab-9a67-42339df51738" 00:11:02.106 } 00:11:02.106 ] 00:11:02.106 }, 00:11:02.106 { 00:11:02.106 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:02.106 "subtype": "NVMe", 00:11:02.106 "listen_addresses": [ 00:11:02.106 { 00:11:02.106 "trtype": "TCP", 00:11:02.106 "adrfam": "IPv4", 00:11:02.106 "traddr": "10.0.0.2", 00:11:02.106 "trsvcid": "4420" 00:11:02.106 } 00:11:02.106 ], 00:11:02.106 "allow_any_host": true, 00:11:02.106 "hosts": [], 00:11:02.106 "serial_number": "SPDK00000000000003", 00:11:02.106 "model_number": "SPDK bdev Controller", 00:11:02.106 "max_namespaces": 32, 00:11:02.106 "min_cntlid": 1, 00:11:02.106 "max_cntlid": 65519, 00:11:02.106 "namespaces": [ 00:11:02.106 { 00:11:02.106 "nsid": 1, 00:11:02.106 "bdev_name": "Null3", 00:11:02.106 "name": "Null3", 00:11:02.106 "nguid": "56A75EA677BC45B6B80EFF715C98B500", 00:11:02.106 "uuid": "56a75ea6-77bc-45b6-b80e-ff715c98b500" 00:11:02.106 } 00:11:02.106 ] 00:11:02.106 }, 00:11:02.106 { 00:11:02.106 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:02.106 "subtype": "NVMe", 00:11:02.106 "listen_addresses": [ 00:11:02.106 { 00:11:02.106 "trtype": "TCP", 00:11:02.106 "adrfam": "IPv4", 00:11:02.106 "traddr": "10.0.0.2", 00:11:02.106 "trsvcid": "4420" 00:11:02.106 } 00:11:02.106 ], 00:11:02.106 "allow_any_host": true, 00:11:02.106 "hosts": [], 00:11:02.106 "serial_number": "SPDK00000000000004", 00:11:02.106 "model_number": "SPDK bdev Controller", 00:11:02.106 "max_namespaces": 32, 00:11:02.106 "min_cntlid": 1, 00:11:02.106 "max_cntlid": 65519, 00:11:02.106 "namespaces": [ 00:11:02.106 { 00:11:02.106 "nsid": 1, 00:11:02.106 "bdev_name": "Null4", 00:11:02.106 "name": "Null4", 00:11:02.106 "nguid": "CBF61ED91CBD4EE0BE0D48DC56DCD146", 00:11:02.106 "uuid": "cbf61ed9-1cbd-4ee0-be0d-48dc56dcd146" 00:11:02.106 } 00:11:02.106 ] 00:11:02.106 } 00:11:02.106 ] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.106 rmmod nvme_tcp 00:11:02.106 rmmod nvme_fabrics 00:11:02.106 rmmod nvme_keyring 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1402055 ']' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1402055 00:11:02.106 
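[Note] The discovery log above reported six records (the current discovery subsystem, cnode1 through cnode4, and the 4430 referral), and nvmf_get_subsystems returned the matching JSON view; teardown then ran the setup in reverse. A condensed sketch of the verify-and-teardown sequence, using commands from the trace (the final jq check mirrors the trace's empty-bdev assertion):

  # initiator-side check of the discovery log
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  # target-side teardown: subsystems and bdevs first, then the referral
  for i in 1 2 3 4; do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  rpc_cmd bdev_get_bdevs | jq -r '.[].name'     # expect empty output after cleanup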
18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1402055 ']' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1402055 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1402055 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1402055' 00:11:02.106 killing process with pid 1402055 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1402055 00:11:02.106 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1402055 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.366 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.903 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:04.903 00:11:04.903 real 0m5.510s 00:11:04.904 user 0m4.625s 00:11:04.904 sys 0m1.892s 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:04.904 ************************************ 00:11:04.904 END TEST nvmf_target_discovery 00:11:04.904 ************************************ 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.904 ************************************ 00:11:04.904 START TEST nvmf_referrals 00:11:04.904 
************************************ 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.904 * Looking for test storage... 00:11:04.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.904 18:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.904 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.811 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:06.812 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.812 
18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:06.812 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:06.812 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:06.812 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.812 18:11:32 
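The loop above resolves each matched PCI function to its kernel interface by globbing sysfs rather than calling any external tool; condensed from the xtrace, it amounts to the sketch below (a reconstruction, not the verbatim nvmf/common.sh source; pci_devs holds the two e810 functions found above):

    net_devs=()
    for pci in "${pci_devs[@]}"; do                       # 0000:0a:00.0 and 0000:0a:00.1 here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdevs bound to this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")                  # yields cvl_0_0 and cvl_0_1 on this rig
    done

The operstate check ([[ up == up ]] in the trace) is omitted from this sketch; the harness only collects interfaces that are up.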
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:06.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:11:06.812 00:11:06.812 --- 10.0.0.2 ping statistics --- 00:11:06.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.812 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:06.812 00:11:06.812 --- 10.0.0.1 ping statistics --- 00:11:06.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.812 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:06.812 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1404087 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1404087 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1404087 ']' 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
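Everything nvmf_tcp_init and nvmfappstart just did reduces to a little iproute2 plumbing plus one backgrounded process: one e810 port (cvl_0_0) becomes the target side inside a private namespace, its sibling (cvl_0_1) stays in the root namespace as the initiator side, a ping each way proves the path, and nvmf_tgt is launched inside the namespace. A minimal sketch, run as root, using the names and 10.0.0.0/24 addressing this rig assigns (the relative ./build path stands in for the workspace path in the trace):

    ip netns add cvl_0_0_ns_spdk                                   # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                                     # 1404087 in this run
    # waitforlisten then blocks until /var/tmp/spdk.sock accepts RPCs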
00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.813 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.813 [2024-07-26 18:11:32.713468] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:11:06.813 [2024-07-26 18:11:32.713550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.813 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.813 [2024-07-26 18:11:32.751732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:06.813 [2024-07-26 18:11:32.783722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.813 [2024-07-26 18:11:32.877680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.813 [2024-07-26 18:11:32.877739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.813 [2024-07-26 18:11:32.877756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.813 [2024-07-26 18:11:32.877770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.813 [2024-07-26 18:11:32.877782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.813 [2024-07-26 18:11:32.878152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.813 [2024-07-26 18:11:32.878182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.813 [2024-07-26 18:11:32.878212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.813 [2024-07-26 18:11:32.878215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.071 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.071 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:07.071 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 [2024-07-26 18:11:33.037598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
10.0.0.2 -s 8009 discovery 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 [2024-07-26 18:11:33.049843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.072 18:11:33 
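With the target up, the whole referral setup runs over the RPC socket (rpc_cmd wraps scripts/rpc.py here, pointed at the nvmf_tgt inside the namespace); rpc.py below is shorthand for that wrapper:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                            # transport options exactly as the harness passes them
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # discovery service on port 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                               # NVMF_REFERRAL_IP_1..3
        rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430            # NVMF_PORT_REFERRAL
    done
    rpc.py nvmf_discovery_get_referrals | jq length                           # the (( 3 == 3 )) check above

The referral targets are loopback addresses that nothing needs to serve: the test only checks that the entries are advertised, it never connects to them.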
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.072 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.329 18:11:33 
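get_referral_ips compares the target's own view with what a host actually sees on the wire: the rpc branch reads the configured referral list, the nvme branch queries the 8009 discovery controller and filters its log page. Reconstructed from the jq pipelines in the trace (NVME_HOST is the --hostnqn/--hostid pair generated earlier):

    get_referral_ips() {
        if [[ $1 == rpc ]]; then    # target-side view
            rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
        else                        # host-side view, via the discovery log page
            nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
                | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
                | sort
        fi
    }
    [[ $(get_referral_ips rpc) == "$(get_referral_ips nvme)" ]]    # the pass condition being tested

Excluding the "current discovery subsystem" record keeps the listener's own 10.0.0.2 entry out of the comparison, so only referral records remain.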
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.329 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.588 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:07.588 18:11:33 
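This second pass exercises the optional -n flag: the same traddr, 127.0.0.2 port 4430, is registered twice, once as a referral to another discovery service and once pinned to subsystem nqn.2016-06.io.spdk:cnode1, which is why the rpc view just printed 127.0.0.2 twice. The two calls, sketched:

    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'    # 127.0.0.2, twice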
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.589 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:07.589 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.589 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.589 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.589 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.589 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.847 18:11:33 
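get_discovery_entries is how the test tells those two otherwise identical referrals apart: in the discovery log page the cnode1 referral surfaces as an "nvme subsystem" record while the other surfaces as a "discovery subsystem referral". A reconstruction of the helper as the trace shows it being used:

    get_discovery_entries() {
        local subtype=$1
        nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
            | jq ".records[] | select(.subtype == \"$subtype\")"
    }
    get_discovery_entries 'nvme subsystem'               | jq -r .subnqn   # nqn.2016-06.io.spdk:cnode1
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn   # nqn.2014-08.org.nvmexpress.discovery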
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.847 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.106 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.106 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.106 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:08.106 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.106 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.364 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.364 rmmod nvme_tcp 00:11:08.622 rmmod nvme_fabrics 00:11:08.623 rmmod nvme_keyring 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1404087 ']' 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1404087 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1404087 ']' 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1404087 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1404087 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1404087' 00:11:08.623 killing process with pid 1404087 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1404087 00:11:08.623 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1404087 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.883 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.790 18:11:36 
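Teardown mirrors setup: unload the kernel initiator stack (the rmmod lines above are modprobe's verbose output), then reap the target by the pid recorded at startup. killprocess condenses to roughly the following sketch; the kill -0 probe and the ps comm= check guard against signalling a reused pid:

    modprobe -v -r nvme-tcp                       # pulls nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" &&                         # still alive? (1404087 in this run)
        [[ $(ps --no-headers -o comm= "$nvmfpid") != sudo ]] &&
        kill "$nvmfpid"                           # SIGTERM the reactor
    wait "$nvmfpid"                               # works because nvmf_tgt is our child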
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.790 00:11:10.790 real 0m6.313s 00:11:10.790 user 0m8.921s 00:11:10.790 sys 0m2.083s 00:11:10.790 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.790 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:10.790 ************************************ 00:11:10.790 END TEST nvmf_referrals 00:11:10.790 ************************************ 00:11:10.790 18:11:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:10.790 18:11:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.790 18:11:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.791 18:11:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.791 ************************************ 00:11:10.791 START TEST nvmf_connect_disconnect 00:11:10.791 ************************************ 00:11:10.791 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.049 * Looking for test storage... 00:11:11.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.049 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.050 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:12.952 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.952 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:12.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.953 18:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:12.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:12.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.953 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:11:13.212 00:11:13.212 --- 10.0.0.2 ping statistics --- 00:11:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.212 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:11:13.212 00:11:13.212 --- 10.0.0.1 ping statistics --- 00:11:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.212 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.212 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1406368 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1406368 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1406368 ']' 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.213 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.213 [2024-07-26 18:11:39.241704] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
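For readers following the trace: the nvmftestinit sequence above wires the two ports of the E810 NIC into a self-contained TCP test rig. The target port (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, the iptables rule opens NVMe/TCP port 4420 on the initiator side, and the two pings gate the test on working connectivity in both directions. A minimal standalone sketch of the same setup, assuming the same interface names and addresses as this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt binary is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so its listener on 10.0.0.2 is only reachable over the cabled loop from the initiator port.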
00:11:13.213 [2024-07-26 18:11:39.241795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.213 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.213 [2024-07-26 18:11:39.281779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:13.213 [2024-07-26 18:11:39.308448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.472 [2024-07-26 18:11:39.397527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.472 [2024-07-26 18:11:39.397585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.472 [2024-07-26 18:11:39.397614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.472 [2024-07-26 18:11:39.397625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.472 [2024-07-26 18:11:39.397635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.472 [2024-07-26 18:11:39.397684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.472 [2024-07-26 18:11:39.397744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.472 [2024-07-26 18:11:39.397809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.472 [2024-07-26 18:11:39.397811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 [2024-07-26 18:11:39.550527] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 
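With the reactors running, connect_disconnect.sh provisions the target over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and (just below in the trace) a listener on 10.0.0.2:4420. rpc_cmd is the harness wrapper; issued by hand, the same sequence could go through SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512          # prints the bdev name, Malloc0 here
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The wall of 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' lines that follows is the test body itself: num_iterations=100 rounds of connect/disconnect. In nvme-cli terms each round amounts to roughly the following (a sketch, not the script verbatim; NVME_CONNECT='nvme connect -i 8' requests 8 I/O queues per connect):

  for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the 'disconnected 1 controller(s)' line
  done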
00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.473 [2024-07-26 18:11:39.612142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:13.473 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:16.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.510 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:39.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:04.891 18:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.891 rmmod nvme_tcp 00:15:04.891 rmmod nvme_fabrics 00:15:04.891 rmmod nvme_keyring 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1406368 ']' 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1406368 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1406368 ']' 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1406368 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406368 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406368' 00:15:04.891 killing process with pid 1406368 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1406368 00:15:04.891 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1406368 00:15:05.151 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.151 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.151 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.151 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.151 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.151 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.152 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.152 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.059 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.059 00:15:07.059 real 3m56.246s 00:15:07.059 user 14m59.185s 00:15:07.059 sys 0m34.943s 00:15:07.059 18:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.059 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:07.059 ************************************ 00:15:07.059 END TEST nvmf_connect_disconnect 00:15:07.059 ************************************ 00:15:07.060 18:15:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:07.060 18:15:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:07.060 18:15:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.060 18:15:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.060 ************************************ 00:15:07.060 START TEST nvmf_multitarget 00:15:07.060 ************************************ 00:15:07.060 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:07.316 * Looking for test storage... 00:15:07.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@47 -- # : 0 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.317 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # 
net_devs=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:09.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:09.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:09.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.223 18:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:09.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.223 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.224 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.224 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.224 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:15:09.519 00:15:09.519 --- 10.0.0.2 ping statistics --- 00:15:09.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.519 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:15:09.519 00:15:09.519 --- 10.0.0.1 ping statistics --- 00:15:09.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.519 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1438101 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1438101 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1438101 ']' 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
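Once this second nvmf_tgt is listening, the multitarget test exercises one SPDK process hosting several independent NVMe-oF targets. The trace below drives it entirely through multitarget_rpc.py and jq: assert that exactly one (default) target exists, create nvmf_tgt_1 and nvmf_tgt_2 (the -s 32 presumably caps subsystems per target), assert the count is now 3, delete both, and assert it is back to 1. Condensed, the checks amount to (a sketch; the path is shortened for readability):

  rpc=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]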
00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.519 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:09.519 [2024-07-26 18:15:35.538575] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:09.519 [2024-07-26 18:15:35.538662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.519 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.519 [2024-07-26 18:15:35.576961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:09.519 [2024-07-26 18:15:35.603660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.778 [2024-07-26 18:15:35.689894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.778 [2024-07-26 18:15:35.689952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.778 [2024-07-26 18:15:35.689981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.778 [2024-07-26 18:15:35.689992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.778 [2024-07-26 18:15:35.690002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.778 [2024-07-26 18:15:35.690087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.778 [2024-07-26 18:15:35.690158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.778 [2024-07-26 18:15:35.690219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.778 [2024-07-26 18:15:35.690221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:09.778 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:10.035 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:10.036 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:10.036 "nvmf_tgt_1" 00:15:10.036 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:10.036 "nvmf_tgt_2" 00:15:10.036 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:10.036 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:10.294 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:10.294 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:10.294 true 00:15:10.294 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:10.552 true 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.552 rmmod nvme_tcp 00:15:10.552 rmmod nvme_fabrics 00:15:10.552 rmmod nvme_keyring 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1438101 ']' 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1438101 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1438101 ']' 00:15:10.552 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1438101 00:15:10.811 18:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1438101 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1438101' 00:15:10.811 killing process with pid 1438101 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1438101 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1438101 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.811 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.347 18:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:13.347 00:15:13.347 real 0m5.784s 00:15:13.347 user 0m6.460s 00:15:13.347 sys 0m1.967s 00:15:13.347 18:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.347 18:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:13.347 ************************************ 00:15:13.347 END TEST nvmf_multitarget 00:15:13.347 ************************************ 00:15:13.347 18:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:13.347 18:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:13.347 18:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:13.347 18:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:13.347 ************************************ 00:15:13.347 START TEST nvmf_rpc 00:15:13.347 ************************************ 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:13.348 * Looking for test storage... 
00:15:13.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.348 18:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.348 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.253 18:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:15.253 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:15.253 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.253 
18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:15.253 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:15.253 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.253 18:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.253 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:15:15.254 00:15:15.254 --- 10.0.0.2 ping statistics --- 00:15:15.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.254 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:15.254 00:15:15.254 --- 10.0.0.1 ping statistics --- 00:15:15.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.254 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1440200 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.254 18:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1440200 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1440200 ']' 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.254 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.254 [2024-07-26 18:15:41.385616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:15.254 [2024-07-26 18:15:41.385696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.512 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.512 [2024-07-26 18:15:41.423992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:15.512 [2024-07-26 18:15:41.456470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.512 [2024-07-26 18:15:41.550128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.512 [2024-07-26 18:15:41.550193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.512 [2024-07-26 18:15:41.550211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.512 [2024-07-26 18:15:41.550224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.512 [2024-07-26 18:15:41.550236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
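The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from waitforlisten, which polls the application's RPC socket until it answers; only then do the rpc_cmd calls below run. In spirit (socket path from this log; using rpc_get_methods as the probe and a bounded retry count are assumptions about the helper's internals):

  # poll until nvmf_tgt answers on its RPC socket
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done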
00:15:15.512 [2024-07-26 18:15:41.550293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.512 [2024-07-26 18:15:41.550331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.512 [2024-07-26 18:15:41.550380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.512 [2024-07-26 18:15:41.550382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.771 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:15.771 "tick_rate": 2700000000, 00:15:15.771 "poll_groups": [ 00:15:15.771 { 00:15:15.771 "name": "nvmf_tgt_poll_group_000", 00:15:15.771 "admin_qpairs": 0, 00:15:15.771 "io_qpairs": 0, 00:15:15.771 "current_admin_qpairs": 0, 00:15:15.771 "current_io_qpairs": 0, 00:15:15.771 "pending_bdev_io": 0, 00:15:15.771 "completed_nvme_io": 0, 00:15:15.771 "transports": [] 00:15:15.771 }, 00:15:15.771 { 00:15:15.771 "name": "nvmf_tgt_poll_group_001", 00:15:15.771 "admin_qpairs": 0, 00:15:15.771 "io_qpairs": 0, 00:15:15.771 "current_admin_qpairs": 0, 00:15:15.771 "current_io_qpairs": 0, 00:15:15.771 "pending_bdev_io": 0, 00:15:15.771 "completed_nvme_io": 0, 00:15:15.771 "transports": [] 00:15:15.771 }, 00:15:15.771 { 00:15:15.771 "name": "nvmf_tgt_poll_group_002", 00:15:15.771 "admin_qpairs": 0, 00:15:15.771 "io_qpairs": 0, 00:15:15.771 "current_admin_qpairs": 0, 00:15:15.771 "current_io_qpairs": 0, 00:15:15.771 "pending_bdev_io": 0, 00:15:15.771 "completed_nvme_io": 0, 00:15:15.771 "transports": [] 00:15:15.771 }, 00:15:15.771 { 00:15:15.772 "name": "nvmf_tgt_poll_group_003", 00:15:15.772 "admin_qpairs": 0, 00:15:15.772 "io_qpairs": 0, 00:15:15.772 "current_admin_qpairs": 0, 00:15:15.772 "current_io_qpairs": 0, 00:15:15.772 "pending_bdev_io": 0, 00:15:15.772 "completed_nvme_io": 0, 00:15:15.772 "transports": [] 00:15:15.772 } 00:15:15.772 ] 00:15:15.772 }' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
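The (( 4 == 4 )) check above is rpc.sh's jcount helper asserting one poll group per reactor core enabled by -m 0xF; jsum, used a few steps later, totals a numeric field the same way. Both are thin jq wrappers, equivalent to (filters and awk expression copied from this log):

  # jcount: count poll groups reported by the target
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l    # expect 4
  # jsum: total admin qpairs across all poll groups
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'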
00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 [2024-07-26 18:15:41.776741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:15.772 "tick_rate": 2700000000, 00:15:15.772 "poll_groups": [ 00:15:15.772 { 00:15:15.772 "name": "nvmf_tgt_poll_group_000", 00:15:15.772 "admin_qpairs": 0, 00:15:15.772 "io_qpairs": 0, 00:15:15.772 "current_admin_qpairs": 0, 00:15:15.772 "current_io_qpairs": 0, 00:15:15.772 "pending_bdev_io": 0, 00:15:15.772 "completed_nvme_io": 0, 00:15:15.772 "transports": [ 00:15:15.772 { 00:15:15.772 "trtype": "TCP" 00:15:15.772 } 00:15:15.772 ] 00:15:15.772 }, 00:15:15.772 { 00:15:15.772 "name": "nvmf_tgt_poll_group_001", 00:15:15.772 "admin_qpairs": 0, 00:15:15.772 "io_qpairs": 0, 00:15:15.772 "current_admin_qpairs": 0, 00:15:15.772 "current_io_qpairs": 0, 00:15:15.772 "pending_bdev_io": 0, 00:15:15.772 "completed_nvme_io": 0, 00:15:15.772 "transports": [ 00:15:15.772 { 00:15:15.772 "trtype": "TCP" 00:15:15.772 } 00:15:15.772 ] 00:15:15.772 }, 00:15:15.772 { 00:15:15.772 "name": "nvmf_tgt_poll_group_002", 00:15:15.772 "admin_qpairs": 0, 00:15:15.772 "io_qpairs": 0, 00:15:15.772 "current_admin_qpairs": 0, 00:15:15.772 "current_io_qpairs": 0, 00:15:15.772 "pending_bdev_io": 0, 00:15:15.772 "completed_nvme_io": 0, 00:15:15.772 "transports": [ 00:15:15.772 { 00:15:15.772 "trtype": "TCP" 00:15:15.772 } 00:15:15.772 ] 00:15:15.772 }, 00:15:15.772 { 00:15:15.772 "name": "nvmf_tgt_poll_group_003", 00:15:15.772 "admin_qpairs": 0, 00:15:15.772 "io_qpairs": 0, 00:15:15.772 "current_admin_qpairs": 0, 00:15:15.772 "current_io_qpairs": 0, 00:15:15.772 "pending_bdev_io": 0, 00:15:15.772 "completed_nvme_io": 0, 00:15:15.772 "transports": [ 00:15:15.772 { 00:15:15.772 "trtype": "TCP" 00:15:15.772 } 00:15:15.772 ] 00:15:15.772 } 00:15:15.772 ] 00:15:15.772 }' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:15.772 18:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 Malloc1 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.772 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.031 [2024-07-26 18:15:41.917769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:16.031 [2024-07-26 18:15:41.940168] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:16.031 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:16.031 could not add new controller: failed to write to nvme-fabrics device 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.031 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:16.600 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.600 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.600 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.600 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:16.600 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.132 [2024-07-26 18:15:44.769115] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:19.132 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:19.132 could not add new controller: failed to write to nvme-fabrics device 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.132 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.391 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.391 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.391 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.391 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:19.391 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
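Both "could not add new controller: failed to write to nvme-fabrics device" failures above are the expected outcome of the subsystem host ACL: with allow_any_host disabled, the target rejects any host NQN that has not been added explicitly, and the kernel surfaces that as an I/O error on /dev/nvme-fabrics. The two ways this test reopens access, condensed from the rpc_cmd calls in the log (NQNs and address taken from this log):

  # option 1: whitelist a single host NQN on the subsystem
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # option 2: open the subsystem to every host
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  # after either, the kernel initiator can connect
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55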
00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:21.298 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 [2024-07-26 18:15:47.558109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.558 
18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.558 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.129 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.129 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:22.129 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.129 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:22.129 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
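waitforserial and waitforserial_disconnect bracket every connect/disconnect cycle in this test: rather than sleeping a fixed time, both poll lsblk for the serial the subsystem was created with (-s SPDKISFASTANDAWESOME). A condensed version of the wait-for-attach side (serial, retry bound, and sleep interval taken from this log; exact helper structure is an assumption):

  serial=SPDKISFASTANDAWESOME
  i=0
  # the block device appears once the controller finishes attaching
  while (( i++ <= 15 )); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && break
      sleep 2
  done

The disconnect side is the mirror image: loop until grep -q -w no longer finds the serial in the lsblk output.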
00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.663 [2024-07-26 18:15:50.293372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.663 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.923 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:24.923 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:24.923 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.923 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:24.923 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:27.455 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:27.455 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.455 [2024-07-26 18:15:53.115872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.455 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:27.712 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:27.712 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:27.712 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.712 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:27.712 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.614 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.614 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:29.615 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.904 18:15:55 
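Every iteration in this stretch repeats the same cycle from target/rpc.sh (lines 81-94 of this run): create the subsystem, expose it, connect a host, verify the namespace surfaces, then tear everything back down. Condensed into a sketch, with rpc_cmd wrapping the RPC call and NVME_HOST holding the --hostnqn/--hostid flags set up by nvmf/common.sh:

loops=5
for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME              # namespace visible to the host?
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME   # namespace gone again?
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done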
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 [2024-07-26 18:15:55.915997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:30.487 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:30.487 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:30.487 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.487 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:30.487 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.392 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.392 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.392 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.652 [2024-07-26 18:15:58.687878] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.652 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.653 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.222 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:33.222 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:33.222 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.222 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:33.222 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:35.761 18:16:01 
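The xtrace_disable / set +x / [[ 0 == 0 ]] triplets bracketing every RPC above come from the rpc_cmd wrapper in common/autotest_common.sh. One plausible shape consistent with this trace; how the real wrapper reaches the RPC server is an assumption here, and the in-tree version does more:

rpc_cmd() {
    xtrace_disable                     # produces the "set +x" lines in the log
    "$rootdir/scripts/rpc.py" "$@"     # assumed transport; the trace only shows the wrapper
    local rc=$?
    xtrace_restore
    [[ $rc == 0 ]]                     # the recurring "[[ 0 == 0 ]]" check
}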
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.761 18:16:01 
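The pass starting here (target/rpc.sh lines 99-107) is RPC-only: no host ever connects, so it exercises pure namespace attach/detach bookkeeping five more times. Sketched:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n flag: nsid auto-assigns to 1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done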
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.761 [2024-07-26 18:16:01.469295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.761 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 [2024-07-26 18:16:01.517357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 [2024-07-26 18:16:01.565533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 [2024-07-26 18:16:01.613704] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.762 [2024-07-26 18:16:01.661859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.762 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.763 18:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:15:35.763 "tick_rate": 2700000000,
00:15:35.763 "poll_groups": [
00:15:35.763 {
00:15:35.763 "name": "nvmf_tgt_poll_group_000",
00:15:35.763 "admin_qpairs": 2,
00:15:35.763 "io_qpairs": 84,
00:15:35.763 "current_admin_qpairs": 0,
00:15:35.763 "current_io_qpairs": 0,
00:15:35.763 "pending_bdev_io": 0,
00:15:35.763 "completed_nvme_io": 114,
00:15:35.763 "transports": [
00:15:35.763 {
00:15:35.763 "trtype": "TCP"
00:15:35.763 }
00:15:35.763 ]
00:15:35.763 },
00:15:35.763 {
00:15:35.763 "name": "nvmf_tgt_poll_group_001",
00:15:35.763 "admin_qpairs": 2,
00:15:35.763 "io_qpairs": 84,
00:15:35.763 "current_admin_qpairs": 0,
00:15:35.763 "current_io_qpairs": 0,
00:15:35.763 "pending_bdev_io": 0,
00:15:35.763 "completed_nvme_io": 156,
00:15:35.763 "transports": [
00:15:35.763 {
00:15:35.763 "trtype": "TCP"
00:15:35.763 }
00:15:35.763 ]
00:15:35.763 },
00:15:35.763 {
00:15:35.763 "name": "nvmf_tgt_poll_group_002",
00:15:35.763 "admin_qpairs": 1,
00:15:35.763 "io_qpairs": 84,
00:15:35.763 "current_admin_qpairs": 0,
00:15:35.763 "current_io_qpairs": 0,
00:15:35.763 "pending_bdev_io": 0,
00:15:35.763 "completed_nvme_io": 234,
00:15:35.763 "transports": [
00:15:35.763 {
00:15:35.763 "trtype": "TCP"
00:15:35.763 }
00:15:35.763 ]
00:15:35.763 },
00:15:35.763 {
00:15:35.763 "name": "nvmf_tgt_poll_group_003",
00:15:35.763 "admin_qpairs": 2,
00:15:35.763 "io_qpairs": 84,
00:15:35.763 "current_admin_qpairs": 0,
00:15:35.763 "current_io_qpairs": 0,
00:15:35.763 "pending_bdev_io": 0,
00:15:35.763 "completed_nvme_io": 182,
00:15:35.763 "transports": [
00:15:35.763 {
00:15:35.763 "trtype": "TCP"
00:15:35.763 }
00:15:35.763 ]
00:15:35.763 }
00:15:35.763 ]
00:15:35.763 }'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.763 rmmod nvme_tcp 00:15:35.763 rmmod nvme_fabrics 00:15:35.763 rmmod nvme_keyring 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1440200 ']' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1440200 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1440200 ']' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1440200 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1440200 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1440200' 00:15:35.763 killing process with pid 1440200 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1440200 00:15:35.763 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1440200 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
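The (( 7 > 0 )) and (( 336 > 0 )) checks above total the stats JSON with the jsum helper from target/rpc.sh, which per the trace is a jq filter fed to an awk accumulator. A sketch, assuming the JSON is passed in via the $stats variable captured earlier:

jsum() {
    local filter=$1
    # sum every number the jq filter selects across the four poll groups
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7
jsum '.poll_groups[].io_qpairs'      # 4*84    = 336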
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.021 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.556 00:15:38.556 real 0m25.149s 00:15:38.556 user 1m21.426s 00:15:38.556 sys 0m4.074s 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.556 ************************************ 00:15:38.556 END TEST nvmf_rpc 00:15:38.556 ************************************ 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.556 ************************************ 00:15:38.556 START TEST nvmf_invalid 00:15:38.556 ************************************ 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:38.556 * Looking for test storage... 00:15:38.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.556 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.557 18:16:04 
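The --hostnqn/--hostid pair used by every nvme connect in these tests is derived once while sourcing nvmf/common.sh; the trace shows the gen-hostnqn call and the resulting NVME_HOST array, while the hostid extraction shown here is an assumption:

NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: reuse the UUID suffix as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")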
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:38.557 18:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.557 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:40.460 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:40.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:40.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.460 18:16:06 
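The trace above classifies the host's NICs by PCI vendor:device ID: the e810, x722, and mlx arrays are filled from a pci_bus_cache keyed on Intel (0x8086) and Mellanox (0x15b3) device IDs, and both ports of an Intel E810 (device 0x159b, bound to the ice driver) end up in pci_devs. A minimal hedged sketch of the same lookup, assuming lspci and sysfs are available (the IDs are taken from this log; the loop itself is illustrative, not the harness's own code):

  # enumerate E810 ports by numeric vendor:device ID (-D prints domain:bus:dev.fn)
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
      echo "Found $bdf (0x8086 - 0x159b), bound to: $driver"
  done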
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.460 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:40.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:15:40.461 00:15:40.461 --- 10.0.0.2 ping statistics --- 00:15:40.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.461 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:15:40.461 00:15:40.461 --- 10.0.0.1 ping statistics --- 00:15:40.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.461 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1444682 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1444682 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1444682 ']' 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.461 18:16:06 
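The sequence above builds the point-to-point NVMe/TCP topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and a single ping in each direction confirms reachability. Distilled from the commands in this log (requires root and iproute2):

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1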
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.461 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.461 [2024-07-26 18:16:06.502534] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:40.461 [2024-07-26 18:16:06.502632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.461 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.461 [2024-07-26 18:16:06.542028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:40.461 [2024-07-26 18:16:06.568514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.719 [2024-07-26 18:16:06.660452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.719 [2024-07-26 18:16:06.660528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.719 [2024-07-26 18:16:06.660542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.719 [2024-07-26 18:16:06.660569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.719 [2024-07-26 18:16:06.660579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
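nvmfappstart has launched the target inside the namespace with shm id 0, every tracepoint group enabled (-e 0xFFFF), and a four-core mask, then blocks in waitforlisten until the JSON-RPC socket answers. A hedged sketch of that start/wait pattern (the binary path is shortened and the polling loop is illustrative, not the harness's own implementation):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket before issuing any rpc.py calls
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # per the startup notices above, the enabled tracepoints can be inspected
  # live or post-mortem (both commands quoted from the log output):
  #   spdk_trace -s nvmf -i 0
  #   cp /dev/shm/nvmf_trace.0 /tmp/   # offline analysis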
00:15:40.719 [2024-07-26 18:16:06.660671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.719 [2024-07-26 18:16:06.660735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.719 [2024-07-26 18:16:06.660782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.719 [2024-07-26 18:16:06.660784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:40.719 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10928 00:15:40.977 [2024-07-26 18:16:07.082382] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:40.977 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:40.977 { 00:15:40.977 "nqn": "nqn.2016-06.io.spdk:cnode10928", 00:15:40.977 "tgt_name": "foobar", 00:15:40.977 "method": "nvmf_create_subsystem", 00:15:40.977 "req_id": 1 00:15:40.977 } 00:15:40.977 Got JSON-RPC error response 00:15:40.977 response: 00:15:40.977 { 00:15:40.977 "code": -32603, 00:15:40.977 "message": "Unable to find target foobar" 00:15:40.977 }' 00:15:40.977 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:40.977 { 00:15:40.977 "nqn": "nqn.2016-06.io.spdk:cnode10928", 00:15:40.977 "tgt_name": "foobar", 00:15:40.977 "method": "nvmf_create_subsystem", 00:15:40.977 "req_id": 1 00:15:40.977 } 00:15:40.977 Got JSON-RPC error response 00:15:40.977 response: 00:15:40.977 { 00:15:40.977 "code": -32603, 00:15:40.977 "message": "Unable to find target foobar" 00:15:40.977 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:40.977 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:40.977 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11137 00:15:41.235 [2024-07-26 18:16:07.379450] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11137: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:41.493 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:41.493 { 00:15:41.493 "nqn": "nqn.2016-06.io.spdk:cnode11137", 00:15:41.493 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:41.493 "method": "nvmf_create_subsystem", 00:15:41.493 "req_id": 1 00:15:41.493 } 00:15:41.493 Got JSON-RPC error 
response 00:15:41.493 response: 00:15:41.493 { 00:15:41.493 "code": -32602, 00:15:41.493 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:41.493 }' 00:15:41.493 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:41.493 { 00:15:41.493 "nqn": "nqn.2016-06.io.spdk:cnode11137", 00:15:41.493 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:41.493 "method": "nvmf_create_subsystem", 00:15:41.493 "req_id": 1 00:15:41.493 } 00:15:41.493 Got JSON-RPC error response 00:15:41.493 response: 00:15:41.493 { 00:15:41.493 "code": -32602, 00:15:41.493 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:41.493 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:41.493 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:41.493 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23020 00:15:41.493 [2024-07-26 18:16:07.636261] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23020: invalid model number 'SPDK_Controller' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:41.752 { 00:15:41.752 "nqn": "nqn.2016-06.io.spdk:cnode23020", 00:15:41.752 "model_number": "SPDK_Controller\u001f", 00:15:41.752 "method": "nvmf_create_subsystem", 00:15:41.752 "req_id": 1 00:15:41.752 } 00:15:41.752 Got JSON-RPC error response 00:15:41.752 response: 00:15:41.752 { 00:15:41.752 "code": -32602, 00:15:41.752 "message": "Invalid MN SPDK_Controller\u001f" 00:15:41.752 }' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:41.752 { 00:15:41.752 "nqn": "nqn.2016-06.io.spdk:cnode23020", 00:15:41.752 "model_number": "SPDK_Controller\u001f", 00:15:41.752 "method": "nvmf_create_subsystem", 00:15:41.752 "req_id": 1 00:15:41.752 } 00:15:41.752 Got JSON-RPC error response 00:15:41.752 response: 00:15:41.752 { 00:15:41.752 "code": -32602, 00:15:41.752 "message": "Invalid MN SPDK_Controller\u001f" 00:15:41.752 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
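The three rejections above (an unknown tgt_name, then a serial number and a model number each ending in a non-printable 0x1f byte) all follow the same negative-test pattern: capture the JSON-RPC error text and glob-match the expected message. A condensed sketch under the assumption that rpc.py stands in for the full scripts/rpc.py path and that stderr is folded into the capture:

  out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10928 2>&1) || true
  [[ $out == *"Unable to find target"* ]]        # code -32603
  out=$(rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11137 2>&1) || true
  [[ $out == *"Invalid SN"* ]]                   # code -32602
  out=$(rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23020 2>&1) || true
  [[ $out == *"Invalid MN"* ]]                   # code -32602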
target/invalid.sh@25 -- # printf %x 45 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:41.752 18:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:41.752 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:41.753 
18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@29 -- # string='\-I|:v7U{}}YMcr5@"V*([' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-I|:v7U{}}YMcr5@"V*([' 00:15:41.753 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-I|:v7U{}}YMcr5@"V*([' nqn.2016-06.io.spdk:cnode22227 00:15:42.012 [2024-07-26 18:16:07.941287] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22227: invalid serial number '\-I|:v7U{}}YMcr5@"V*([' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:42.012 { 00:15:42.012 "nqn": "nqn.2016-06.io.spdk:cnode22227", 00:15:42.012 "serial_number": "\\-I|:v7U{}}YMcr5@\"V*([", 00:15:42.012 "method": "nvmf_create_subsystem", 00:15:42.012 "req_id": 1 00:15:42.012 } 00:15:42.012 Got JSON-RPC error response 00:15:42.012 response: 00:15:42.012 { 00:15:42.012 "code": -32602, 00:15:42.012 "message": "Invalid SN \\-I|:v7U{}}YMcr5@\"V*([" 00:15:42.012 }' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:42.012 { 00:15:42.012 "nqn": "nqn.2016-06.io.spdk:cnode22227", 00:15:42.012 "serial_number": "\\-I|:v7U{}}YMcr5@\"V*([", 00:15:42.012 "method": "nvmf_create_subsystem", 00:15:42.012 "req_id": 1 00:15:42.012 } 00:15:42.012 Got JSON-RPC error response 00:15:42.012 response: 00:15:42.012 { 00:15:42.012 "code": -32602, 00:15:42.012 "message": "Invalid SN \\-I|:v7U{}}YMcr5@\"V*([" 00:15:42.012 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x7f' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 86 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:42.012 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.012 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
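The long run of printf/echo/string+= entries above (and continuing below) is gen_random_s building a random string one character at a time: pick a code point from the chars array (ASCII 32 through 127), render it with printf %x plus echo -e, and append it. A hedged reconstruction of the helper from its trace, not the verbatim script:

  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))        # printable ASCII plus DEL, per the trace
      for ((ll = 0; ll < length; ll++)); do
          string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }

The 21- and 41-character strings produced this way are then fed back to nvmf_create_subsystem as deliberately hostile serial and model numbers.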
00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x24' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:42.013 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 62 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ip)4W%V>K;v$xRAJ57R,RZ!0hI~4;$_Zt)eu>^]' 00:15:42.014 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Ip)4W%V>K;v$xRAJ57R,RZ!0hI~4;$_Zt)eu>^]' nqn.2016-06.io.spdk:cnode10676 00:15:42.272 [2024-07-26 18:16:08.350675] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10676: invalid model number 'Ip)4W%V>K;v$xRAJ57R,RZ!0hI~4;$_Zt)eu>^]' 00:15:42.272 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:42.272 { 00:15:42.272 "nqn": "nqn.2016-06.io.spdk:cnode10676", 00:15:42.272 "model_number": "\u007fIp)4W%V>K;v$xRAJ\u007f57R,RZ!0hI~4;$_Zt)eu>^]", 00:15:42.272 "method": "nvmf_create_subsystem", 00:15:42.272 "req_id": 1 00:15:42.272 } 00:15:42.272 Got JSON-RPC error response 00:15:42.272 response: 00:15:42.272 { 00:15:42.272 "code": -32602, 00:15:42.272 "message": "Invalid MN \u007fIp)4W%V>K;v$xRAJ\u007f57R,RZ!0hI~4;$_Zt)eu>^]" 00:15:42.272 }' 00:15:42.272 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:42.272 { 00:15:42.272 "nqn": "nqn.2016-06.io.spdk:cnode10676", 00:15:42.272 "model_number": "\u007fIp)4W%V>K;v$xRAJ\u007f57R,RZ!0hI~4;$_Zt)eu>^]", 00:15:42.272 "method": "nvmf_create_subsystem", 00:15:42.272 "req_id": 1 00:15:42.272 } 00:15:42.272 Got JSON-RPC error response 00:15:42.272 response: 00:15:42.272 { 00:15:42.272 "code": -32602, 00:15:42.272 "message": "Invalid MN \u007fIp)4W%V>K;v$xRAJ\u007f57R,RZ!0hI~4;$_Zt)eu>^]" 00:15:42.272 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:42.272 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:42.530 [2024-07-26 18:16:08.611639] 
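With the invalid-name cases covered, the script brings up the transport layer: nvmf_create_transport with --trtype tcp triggers the "TCP Transport Init" notice that follows, and a valid subsystem is then created for the listener tests. Both calls as shown in this log (-s sets the serial number, -a allows any host):

  rpc.py nvmf_create_transport --trtype tcp
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a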
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:42.530 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:15:42.788 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:15:42.788 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:15:42.788 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:15:42.788 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:15:42.788 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:15:43.046 [2024-07-26 18:16:09.105273] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:15:43.046 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:15:43.046 {
00:15:43.046 "nqn": "nqn.2016-06.io.spdk:cnode",
00:15:43.046 "listen_address": {
00:15:43.046 "trtype": "tcp",
00:15:43.046 "traddr": "",
00:15:43.046 "trsvcid": "4421"
00:15:43.046 },
00:15:43.046 "method": "nvmf_subsystem_remove_listener",
00:15:43.046 "req_id": 1
00:15:43.046 }
00:15:43.046 Got JSON-RPC error response
00:15:43.046 response:
00:15:43.046 {
00:15:43.046 "code": -32602,
00:15:43.046 "message": "Invalid parameters"
00:15:43.046 }'
00:15:43.046 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:15:43.046 {
00:15:43.046 "nqn": "nqn.2016-06.io.spdk:cnode",
00:15:43.046 "listen_address": {
00:15:43.046 "trtype": "tcp",
00:15:43.047 "traddr": "",
00:15:43.047 "trsvcid": "4421"
00:15:43.047 },
00:15:43.047 "method": "nvmf_subsystem_remove_listener",
00:15:43.047 "req_id": 1
00:15:43.047 }
00:15:43.047 Got JSON-RPC error response
00:15:43.047 response:
00:15:43.047 {
00:15:43.047 "code": -32602,
00:15:43.047 "message": "Invalid parameters"
00:15:43.047 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:15:43.047 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31168 -i 0
00:15:43.305 [2024-07-26 18:16:09.350032] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31168: invalid cntlid range [0-65519]
00:15:43.305 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:15:43.305 {
00:15:43.305 "nqn": "nqn.2016-06.io.spdk:cnode31168",
00:15:43.305 "min_cntlid": 0,
00:15:43.305 "method": "nvmf_create_subsystem",
00:15:43.305 "req_id": 1
00:15:43.305 }
00:15:43.305 Got JSON-RPC error response
00:15:43.305 response:
00:15:43.305 {
00:15:43.305 "code": -32602,
00:15:43.305 "message": "Invalid cntlid range [0-65519]"
00:15:43.305 }'
00:15:43.305 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:15:43.305 {
00:15:43.305 "nqn": "nqn.2016-06.io.spdk:cnode31168",
00:15:43.305 "min_cntlid": 0,
00:15:43.305 "method": "nvmf_create_subsystem",
00:15:43.305 "req_id": 1
00:15:43.305 }
00:15:43.305 Got JSON-RPC error response
00:15:43.305 response:
00:15:43.305 {
00:15:43.305 "code": -32602,
00:15:43.305 "message": "Invalid cntlid range [0-65519]"
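Each negative probe in invalid.sh follows the pattern traced above: the JSON-RPC error payload is captured into out and then glob-matched against the expected message. A minimal standalone version of that check is sketched below; the rpc.py path is the one used throughout this run, while the '|| true' guard is an assumption added so the capture survives under 'set -e' when rpc.py exits non-zero on an error response.

    # Sketch: assert that the target rejects min_cntlid=0 with the expected message.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31168 -i 0 2>&1) || true
    if [[ $out == *'Invalid cntlid range'* ]]; then
        echo 'negative test passed: min_cntlid=0 rejected'
    else
        echo "unexpected response: $out" >&2
        exit 1
    fi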
00:15:43.305 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:43.305 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17376 -i 65520 00:15:43.563 [2024-07-26 18:16:09.618927] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17376: invalid cntlid range [65520-65519] 00:15:43.563 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:43.563 { 00:15:43.563 "nqn": "nqn.2016-06.io.spdk:cnode17376", 00:15:43.563 "min_cntlid": 65520, 00:15:43.563 "method": "nvmf_create_subsystem", 00:15:43.563 "req_id": 1 00:15:43.563 } 00:15:43.563 Got JSON-RPC error response 00:15:43.563 response: 00:15:43.563 { 00:15:43.563 "code": -32602, 00:15:43.563 "message": "Invalid cntlid range [65520-65519]" 00:15:43.563 }' 00:15:43.563 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:43.563 { 00:15:43.563 "nqn": "nqn.2016-06.io.spdk:cnode17376", 00:15:43.563 "min_cntlid": 65520, 00:15:43.563 "method": "nvmf_create_subsystem", 00:15:43.563 "req_id": 1 00:15:43.563 } 00:15:43.563 Got JSON-RPC error response 00:15:43.563 response: 00:15:43.563 { 00:15:43.563 "code": -32602, 00:15:43.563 "message": "Invalid cntlid range [65520-65519]" 00:15:43.563 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:43.563 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7180 -I 0 00:15:43.821 [2024-07-26 18:16:09.879852] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7180: invalid cntlid range [1-0] 00:15:43.821 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:43.821 { 00:15:43.821 "nqn": "nqn.2016-06.io.spdk:cnode7180", 00:15:43.821 "max_cntlid": 0, 00:15:43.821 "method": "nvmf_create_subsystem", 00:15:43.821 "req_id": 1 00:15:43.821 } 00:15:43.821 Got JSON-RPC error response 00:15:43.821 response: 00:15:43.821 { 00:15:43.821 "code": -32602, 00:15:43.821 "message": "Invalid cntlid range [1-0]" 00:15:43.821 }' 00:15:43.821 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:43.821 { 00:15:43.821 "nqn": "nqn.2016-06.io.spdk:cnode7180", 00:15:43.821 "max_cntlid": 0, 00:15:43.821 "method": "nvmf_create_subsystem", 00:15:43.821 "req_id": 1 00:15:43.821 } 00:15:43.821 Got JSON-RPC error response 00:15:43.821 response: 00:15:43.821 { 00:15:43.821 "code": -32602, 00:15:43.821 "message": "Invalid cntlid range [1-0]" 00:15:43.821 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:43.821 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9583 -I 65520 00:15:44.079 [2024-07-26 18:16:10.136755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9583: invalid cntlid range [1-65520] 00:15:44.079 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:44.079 { 00:15:44.079 "nqn": "nqn.2016-06.io.spdk:cnode9583", 00:15:44.079 "max_cntlid": 65520, 00:15:44.079 "method": "nvmf_create_subsystem", 00:15:44.079 "req_id": 1 00:15:44.079 } 00:15:44.079 Got JSON-RPC error response 00:15:44.079 
response:
00:15:44.079 {
00:15:44.079 "code": -32602,
00:15:44.079 "message": "Invalid cntlid range [1-65520]"
00:15:44.079 }'
00:15:44.079 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:15:44.079 {
00:15:44.079 "nqn": "nqn.2016-06.io.spdk:cnode9583",
00:15:44.079 "max_cntlid": 65520,
00:15:44.079 "method": "nvmf_create_subsystem",
00:15:44.079 "req_id": 1
00:15:44.079 }
00:15:44.079 Got JSON-RPC error response
00:15:44.079 response:
00:15:44.079 {
00:15:44.079 "code": -32602,
00:15:44.079 "message": "Invalid cntlid range [1-65520]"
00:15:44.079 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:44.079 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode559 -i 6 -I 5
00:15:44.337 [2024-07-26 18:16:10.397589] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode559: invalid cntlid range [6-5]
00:15:44.337 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:15:44.337 {
00:15:44.337 "nqn": "nqn.2016-06.io.spdk:cnode559",
00:15:44.337 "min_cntlid": 6,
00:15:44.337 "max_cntlid": 5,
00:15:44.337 "method": "nvmf_create_subsystem",
00:15:44.337 "req_id": 1
00:15:44.337 }
00:15:44.337 Got JSON-RPC error response
00:15:44.337 response:
00:15:44.337 {
00:15:44.337 "code": -32602,
00:15:44.337 "message": "Invalid cntlid range [6-5]"
00:15:44.337 }'
00:15:44.337 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:15:44.337 {
00:15:44.337 "nqn": "nqn.2016-06.io.spdk:cnode559",
00:15:44.337 "min_cntlid": 6,
00:15:44.337 "max_cntlid": 5,
00:15:44.337 "method": "nvmf_create_subsystem",
00:15:44.337 "req_id": 1
00:15:44.337 }
00:15:44.337 Got JSON-RPC error response
00:15:44.337 response:
00:15:44.337 {
00:15:44.337 "code": -32602,
00:15:44.337 "message": "Invalid cntlid range [6-5]"
00:15:44.337 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:15:44.337 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:15:44.595 {
00:15:44.595 "name": "foobar",
00:15:44.595 "method": "nvmf_delete_target",
00:15:44.595 "req_id": 1
00:15:44.595 }
00:15:44.595 Got JSON-RPC error response
00:15:44.595 response:
00:15:44.595 {
00:15:44.595 "code": -32602,
00:15:44.595 "message": "The specified target doesn'\''t exist, cannot delete it."
00:15:44.595 }'
00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:15:44.595 {
00:15:44.595 "name": "foobar",
00:15:44.595 "method": "nvmf_delete_target",
00:15:44.595 "req_id": 1
00:15:44.595 }
00:15:44.595 Got JSON-RPC error response
00:15:44.595 response:
00:15:44.595 {
00:15:44.595 "code": -32602,
00:15:44.595 "message": "The specified target doesn't exist, cannot delete it."
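The five cntlid probes differ only in their -i/-I arguments: controller IDs must lie in [1, 65519] and min must not exceed max, so 0, 65520, and the inverted pair 6/5 are each rejected with -32602. The loop below is a hedged sketch of the same matrix in one sweep (a single cnode NQN is reused for brevity; the original run used a distinct cnode per case).

    # Sketch: sweep the invalid cntlid boundaries exercised above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for range in '0:' '65520:' ':0' ':65520' '6:5'; do   # "min:max", empty field = flag omitted
        min=${range%%:*} max=${range##*:}
        args=()
        [[ -n $min ]] && args+=(-i "$min")
        [[ -n $max ]] && args+=(-I "$max")
        out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode559 "${args[@]}" 2>&1) || true
        [[ $out == *'Invalid cntlid range'* ]] || { echo "not rejected: $range" >&2; exit 1; }
    done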
00:15:44.595 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.595 rmmod nvme_tcp 00:15:44.595 rmmod nvme_fabrics 00:15:44.595 rmmod nvme_keyring 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1444682 ']' 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1444682 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1444682 ']' 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1444682 00:15:44.595 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1444682 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1444682' 00:15:44.596 killing process with pid 1444682 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1444682 00:15:44.596 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1444682 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.855 
18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.855 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.763 00:15:46.763 real 0m8.636s 00:15:46.763 user 0m20.288s 00:15:46.763 sys 0m2.438s 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:46.763 ************************************ 00:15:46.763 END TEST nvmf_invalid 00:15:46.763 ************************************ 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.763 18:16:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.021 ************************************ 00:15:47.021 START TEST nvmf_connect_stress 00:15:47.021 ************************************ 00:15:47.021 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:47.021 * Looking for test storage... 00:15:47.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.021 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.021 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:47.021 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.021 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.022 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:48.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:48.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:48.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.924 18:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:48.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.924 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.925 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:15:48.925 00:15:48.925 --- 10.0.0.2 ping statistics --- 00:15:48.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.925 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:15:48.925 00:15:48.925 --- 10.0.0.1 ping statistics --- 00:15:48.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.925 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.925 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1447195 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1447195 00:15:49.183 18:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1447195 ']'
00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:49.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:49.183 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:49.183 [2024-07-26 18:16:15.115864] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:15:49.183 [2024-07-26 18:16:15.115939] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:49.183 EAL: No free 2048 kB hugepages reported on node 1
00:15:49.183 [2024-07-26 18:16:15.155006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:49.183 [2024-07-26 18:16:15.187606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:49.183 [2024-07-26 18:16:15.283718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:49.183 [2024-07-26 18:16:15.283783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:49.183 [2024-07-26 18:16:15.283798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:49.183 [2024-07-26 18:16:15.283812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:49.183 [2024-07-26 18:16:15.283823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
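The waitforlisten helper traced above polls until the new daemon answers on /var/tmp/spdk.sock (rpc_addr, with max_retries=100). A reduced sketch of the same bring-up follows, with rpc_get_methods assumed as the readiness probe; the launch command and namespace name are taken verbatim from this run.

    # Sketch: launch nvmf_tgt inside the test namespace and poll its RPC socket.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do            # mirrors max_retries=100
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"                   # fail fast if the target died during startup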
00:15:49.183 [2024-07-26 18:16:15.283881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:15:49.183 [2024-07-26 18:16:15.283934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:15:49.183 [2024-07-26 18:16:15.283937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:49.461 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:49.461 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0
00:15:49.461 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:49.461 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:49.461 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:49.462 [2024-07-26 18:16:15.416316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:49.462 [2024-07-26 18:16:15.442163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:49.462 NULL1
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
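Everything from this point on runs against that socket over JSON-RPC: one TCP transport, an allow-any-host subsystem capped at ten namespaces, a TCP listener on 10.0.0.2:4420, and a 1000 MiB null bdev with 512-byte blocks for backing storage. Invoked directly instead of through the rpc_cmd wrapper, the same sequence looks like the sketch below; attaching NULL1 to the subsystem with nvmf_subsystem_add_ns presumably follows, but it is not visible in this excerpt.

    # Sketch: the target configuration issued by connect_stress.sh, run by hand.
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512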
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1447333
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
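connect_stress has just been started in the background against that listener (-t 10, presumably a ten-second run), and the for/cat loop being traced here and below appears to batch twenty rounds of RPC commands into rpc.txt to churn the target underneath it; the liveness criterion, seen repeatedly in the following checks, is simply kill -0 on PERF_PID. A sketch of that supervision pattern, with the churn step left as a placeholder:

    # Sketch: drive config churn while asserting the stress process stays alive.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    perf_pid=$!
    while kill -0 "$perf_pid" 2>/dev/null; do
        :   # placeholder: replay one batch from rpc.txt against the target here
        sleep 1
    done
    wait "$perf_pid"   # propagate the stress tool's exit status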
00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1447333 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.462 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.725 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.725 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1447333 00:15:49.725 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.725 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.726 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.295 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.295 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1447333 00:15:50.295 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.295 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:15:50.295 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:50.554 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:50.554 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1447333
00:15:50.554 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:15:50.554 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:50.554 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[... the same [[ 0 == 0 ]] / kill -0 1447333 / rpc_cmd / xtrace_disable / set +x status-check cycle repeats roughly every 250-550 ms from 00:15:50.812 (18:16:16) through 00:15:59.349 (18:16:25) ...]
00:15:59.607 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1447333
00:15:59.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (1447333) - No such process 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1447333 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.869 rmmod nvme_tcp 00:15:59.869 rmmod nvme_fabrics 00:15:59.869 rmmod nvme_keyring 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1447195 ']' 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1447195 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1447195 ']' 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1447195 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1447195 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1447195' 00:15:59.869 killing process with pid 1447195 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1447195 00:15:59.869 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1447195 00:16:00.128 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.129 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:02.033 00:16:02.033 real 0m15.226s 00:16:02.033 user 0m38.235s 00:16:02.033 sys 0m5.918s 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.033 ************************************ 00:16:02.033 END TEST nvmf_connect_stress 00:16:02.033 ************************************ 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.033 18:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 ************************************ 00:16:02.291 START TEST nvmf_fused_ordering 00:16:02.291 ************************************ 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:02.291 * Looking for test storage... 
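The long run of kill -0 1447333 / rpc_cmd entries that precedes the END TEST banner above is connect_stress.sh keeping watch over its background stress process: every few hundred milliseconds it confirms the process is still alive and issues an RPC to the target, and it only tears down once kill -0 reports the process gone. A minimal bash sketch of that idiom follows; the nvmf_get_stats call is an illustrative stand-in, since the trace shows rpc_cmd but not which RPC it wraps:

  # Watch a background workload (PID taken from this log) while poking the target over RPC.
  STRESS_PID=1447333
  while kill -0 "$STRESS_PID" 2>/dev/null; do         # exits 0 while the process still exists
      scripts/rpc.py nvmf_get_stats > rpc.txt || break  # illustrative RPC; any cheap call works
      sleep 0.25                                        # the trace shows ~250-550 ms per cycle
  done
  wait "$STRESS_PID"                                    # reap it once kill -0 starts failing
  rm -f rpc.txt                                         # same cleanup connect_stress.sh@39 does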
00:16:02.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.291 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:02.292 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.196 18:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:04.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:04.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
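The Found 0000:0a:00.0 (0x8086 - 0x159b) entries above come from gather_supported_nvmf_pci_devs matching PCI functions against a table of known vendor:device pairs (0x8086:0x159b is an Intel E810 part bound to the ice driver) and then mapping each function to its kernel net device through sysfs. A sketch of the same walk, assuming nothing beyond the standard sysfs layout the trace itself reads:

  # Find NICs by PCI vendor/device ID and list their net devices, as the trace does.
  vendor=0x8086 device=0x159b                           # Intel E810 (ice), per this log
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == "$vendor" && $(cat "$pci/device") == "$device" ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do                       # e.g. .../0000:0a:00.0/net/cvl_0_0
          [[ -e $net ]] && echo "  net device: ${net##*/}"
      done
  done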
00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:04.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:04.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.196 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:16:04.197 00:16:04.197 --- 10.0.0.2 ping statistics --- 00:16:04.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.197 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:16:04.197 00:16:04.197 --- 10.0.0.1 ping statistics --- 00:16:04.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.197 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1450475 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1450475 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1450475 ']' 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.197 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.457 [2024-07-26 18:16:30.384929] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
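Everything from ip netns add onward above is nvmf_tcp_init carving the two E810 ports into a target/initiator pair on a single box: cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened, and both directions are ping-verified before nvmf_tgt is launched inside the namespace with core mask 0x2 (one reactor, on core 1). Collected from the trace:

  # Isolate the target NIC in its own network namespace; the initiator NIC stays on the host.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &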
00:16:04.457 [2024-07-26 18:16:30.385025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.457 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.457 [2024-07-26 18:16:30.427210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:04.457 [2024-07-26 18:16:30.455651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.457 [2024-07-26 18:16:30.550920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.457 [2024-07-26 18:16:30.550994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.457 [2024-07-26 18:16:30.551011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.457 [2024-07-26 18:16:30.551023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.457 [2024-07-26 18:16:30.551034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.457 [2024-07-26 18:16:30.551087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 [2024-07-26 18:16:30.701415] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.715 
18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 [2024-07-26 18:16:30.717654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 NULL1 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.715 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:04.715 [2024-07-26 18:16:30.764165] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:04.715 [2024-07-26 18:16:30.764210] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450502 ] 00:16:04.715 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.715 [2024-07-26 18:16:30.801434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
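The rpc_cmd sequence above is the entire fixture for this test: one TCP transport, one subsystem (allow-any-host, at most 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev with 512-byte blocks attached as namespace 1, which is why the tool reports Namespace ID: 1 size: 1GB below. rpc_cmd in the harness is effectively a thin wrapper around SPDK's scripts/rpc.py, so the same fixture can be replayed by hand against a running nvmf_tgt:

  # Recreate the fused_ordering fixture, with the same arguments the trace shows.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO unit
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                    # 1000 MiB, 512 B blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1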
00:16:05.283 Attached to nqn.2016-06.io.spdk:cnode1 00:16:05.283 Namespace ID: 1 size: 1GB 00:16:05.283 fused_ordering(0) 00:16:05.283 fused_ordering(1) 00:16:05.283 fused_ordering(2) 00:16:05.283 fused_ordering(3) 00:16:05.283 fused_ordering(4) 00:16:05.283 fused_ordering(5) 00:16:05.283 fused_ordering(6) 00:16:05.283 fused_ordering(7) 00:16:05.283 fused_ordering(8) 00:16:05.283 fused_ordering(9) 00:16:05.283 fused_ordering(10) 00:16:05.283 fused_ordering(11) 00:16:05.283 fused_ordering(12) 00:16:05.283 fused_ordering(13) 00:16:05.283 fused_ordering(14) 00:16:05.283 fused_ordering(15) 00:16:05.283 fused_ordering(16) 00:16:05.283 fused_ordering(17) 00:16:05.283 fused_ordering(18) 00:16:05.283 fused_ordering(19) 00:16:05.283 fused_ordering(20) 00:16:05.283 fused_ordering(21) 00:16:05.283 fused_ordering(22) 00:16:05.283 fused_ordering(23) 00:16:05.283 fused_ordering(24) 00:16:05.283 fused_ordering(25) 00:16:05.283 fused_ordering(26) 00:16:05.283 fused_ordering(27) 00:16:05.283 fused_ordering(28) 00:16:05.283 fused_ordering(29) 00:16:05.283 fused_ordering(30) 00:16:05.283 fused_ordering(31) 00:16:05.283 fused_ordering(32) 00:16:05.283 fused_ordering(33) 00:16:05.283 fused_ordering(34) 00:16:05.283 fused_ordering(35) 00:16:05.283 fused_ordering(36) 00:16:05.283 fused_ordering(37) 00:16:05.283 fused_ordering(38) 00:16:05.283 fused_ordering(39) 00:16:05.283 fused_ordering(40) 00:16:05.283 fused_ordering(41) 00:16:05.283 fused_ordering(42) 00:16:05.283 fused_ordering(43) 00:16:05.283 fused_ordering(44) 00:16:05.283 fused_ordering(45) 00:16:05.283 fused_ordering(46) 00:16:05.283 fused_ordering(47) 00:16:05.283 fused_ordering(48) 00:16:05.283 fused_ordering(49) 00:16:05.283 fused_ordering(50) 00:16:05.283 fused_ordering(51) 00:16:05.283 fused_ordering(52) 00:16:05.283 fused_ordering(53) 00:16:05.283 fused_ordering(54) 00:16:05.283 fused_ordering(55) 00:16:05.283 fused_ordering(56) 00:16:05.283 fused_ordering(57) 00:16:05.283 fused_ordering(58) 00:16:05.283 fused_ordering(59) 00:16:05.283 fused_ordering(60) 00:16:05.283 fused_ordering(61) 00:16:05.283 fused_ordering(62) 00:16:05.283 fused_ordering(63) 00:16:05.283 fused_ordering(64) 00:16:05.283 fused_ordering(65) 00:16:05.283 fused_ordering(66) 00:16:05.283 fused_ordering(67) 00:16:05.283 fused_ordering(68) 00:16:05.283 fused_ordering(69) 00:16:05.283 fused_ordering(70) 00:16:05.283 fused_ordering(71) 00:16:05.283 fused_ordering(72) 00:16:05.283 fused_ordering(73) 00:16:05.283 fused_ordering(74) 00:16:05.283 fused_ordering(75) 00:16:05.283 fused_ordering(76) 00:16:05.283 fused_ordering(77) 00:16:05.283 fused_ordering(78) 00:16:05.283 fused_ordering(79) 00:16:05.283 fused_ordering(80) 00:16:05.283 fused_ordering(81) 00:16:05.283 fused_ordering(82) 00:16:05.283 fused_ordering(83) 00:16:05.283 fused_ordering(84) 00:16:05.283 fused_ordering(85) 00:16:05.283 fused_ordering(86) 00:16:05.283 fused_ordering(87) 00:16:05.283 fused_ordering(88) 00:16:05.283 fused_ordering(89) 00:16:05.283 fused_ordering(90) 00:16:05.283 fused_ordering(91) 00:16:05.283 fused_ordering(92) 00:16:05.283 fused_ordering(93) 00:16:05.283 fused_ordering(94) 00:16:05.284 fused_ordering(95) 00:16:05.284 fused_ordering(96) 00:16:05.284 fused_ordering(97) 00:16:05.284 fused_ordering(98) 00:16:05.284 fused_ordering(99) 00:16:05.284 fused_ordering(100) 00:16:05.284 fused_ordering(101) 00:16:05.284 fused_ordering(102) 00:16:05.284 fused_ordering(103) 00:16:05.284 fused_ordering(104) 00:16:05.284 fused_ordering(105) 00:16:05.284 fused_ordering(106) 00:16:05.284 fused_ordering(107) 
00:16:05.284 [fused_ordering output condensed: fused_ordering(108) through fused_ordering(967), 860 consecutive entries, logged in sequence as the timestamps advance from 00:16:05.284 to 00:16:07.931; the tail of the run follows]
00:16:07.931 fused_ordering(968) 00:16:07.931 fused_ordering(969) 00:16:07.931 fused_ordering(970) 00:16:07.931 fused_ordering(971) 00:16:07.931 fused_ordering(972) 00:16:07.931 fused_ordering(973) 00:16:07.931 fused_ordering(974) 00:16:07.931 fused_ordering(975) 00:16:07.931 fused_ordering(976) 00:16:07.931 fused_ordering(977) 00:16:07.931 fused_ordering(978) 00:16:07.931 fused_ordering(979) 00:16:07.931 fused_ordering(980) 00:16:07.931 fused_ordering(981) 00:16:07.931 fused_ordering(982) 00:16:07.931 fused_ordering(983) 00:16:07.931 fused_ordering(984) 00:16:07.931 fused_ordering(985) 00:16:07.931 fused_ordering(986) 00:16:07.931 fused_ordering(987) 00:16:07.931 fused_ordering(988) 00:16:07.931 fused_ordering(989) 00:16:07.931 fused_ordering(990) 00:16:07.931 fused_ordering(991) 00:16:07.931 fused_ordering(992) 00:16:07.931 fused_ordering(993) 00:16:07.931 fused_ordering(994) 00:16:07.931 fused_ordering(995) 00:16:07.931 fused_ordering(996) 00:16:07.931 fused_ordering(997) 00:16:07.931 fused_ordering(998) 00:16:07.931 fused_ordering(999) 00:16:07.931 fused_ordering(1000) 00:16:07.931 fused_ordering(1001) 00:16:07.931 fused_ordering(1002) 00:16:07.931 fused_ordering(1003) 00:16:07.931 fused_ordering(1004) 00:16:07.931 fused_ordering(1005) 00:16:07.931 fused_ordering(1006) 00:16:07.931 fused_ordering(1007) 00:16:07.931 fused_ordering(1008) 00:16:07.931 fused_ordering(1009) 00:16:07.931 fused_ordering(1010) 00:16:07.931 fused_ordering(1011) 00:16:07.931 fused_ordering(1012) 00:16:07.931 fused_ordering(1013) 00:16:07.931 fused_ordering(1014) 00:16:07.931 fused_ordering(1015) 00:16:07.931 fused_ordering(1016) 00:16:07.931 fused_ordering(1017) 00:16:07.931 fused_ordering(1018) 00:16:07.931 fused_ordering(1019) 00:16:07.931 fused_ordering(1020) 00:16:07.931 fused_ordering(1021) 00:16:07.931 fused_ordering(1022) 00:16:07.931 fused_ordering(1023) 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.931 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.931 rmmod nvme_tcp 00:16:07.931 rmmod nvme_fabrics 00:16:07.931 rmmod nvme_keyring 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1450475 ']' 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1450475 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 
1450475 ']' 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1450475 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.931 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1450475 00:16:08.190 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1450475' 00:16:08.191 killing process with pid 1450475 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1450475 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1450475 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.191 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:10.729 00:16:10.729 real 0m8.189s 00:16:10.729 user 0m5.803s 00:16:10.729 sys 0m3.883s 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:10.729 ************************************ 00:16:10.729 END TEST nvmf_fused_ordering 00:16:10.729 ************************************ 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:10.729 ************************************ 00:16:10.729 START TEST nvmf_ns_masking 00:16:10.729 ************************************ 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 
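Before the trace resumes, a sketch of what ns_masking.sh is about to drive may help. Every command below is lifted from the xtrace that follows in this log; the grouping into one script, the RPC/NQN variable names, and the comments are editorial, so treat it as a hand-runnable reconstruction under those assumptions rather than the test script itself (it presumes a running nvmf_tgt, the stock rpc.py, nvme-cli, and jq):

#!/usr/bin/env bash
# Hedged reconstruction of the namespace-masking setup exercised below.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                  # transport flags exactly as logged
$RPC bdev_malloc_create 64 512 -b Malloc1                     # 64 MiB RAM disk, 512 B blocks
$RPC bdev_malloc_create 64 512 -b Malloc2
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME  # -a: allow any host
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1                # namespace 1, visible by default
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect with an explicit host NQN and a random host ID
# (the test generates its HOSTID with uuidgen), four I/O queues as in the trace.
nvme connect -t tcp -n "$NQN" -q nqn.2016-06.io.spdk:host1 -I "$(uuidgen)" \
    -a 10.0.0.2 -s 4420 -i 4

# Visibility check, the way the test's ns_is_visible() does it: the NSID must
# show up in list-ns and id-ns must report a non-zero NGUID.
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

# Masking proper: re-add the namespace with auto-visibility off, so no host
# sees it until it is explicitly attached.
$RPC nvmf_subsystem_remove_ns "$NQN" 1
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1 --no-auto-visible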
00:16:10.729 * Looking for test storage... 00:16:10.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triplet repeated six more times, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[toolchain triplet prepended again, condensed] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[toolchain triplet prepended again, condensed] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH as above, condensed] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.729 18:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a3bff86b-5686-457c-98be-c0296c5390a4 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=795ec5fb-44c4-4eae-9a40-3949b8960258 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=53ae001f-37ae-44d0-bbbe-c1379bd8f10b 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:10.729 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:12.633 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.633 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.633 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.633 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.633 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:12.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:12.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:12.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:12.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:12.634 18:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:12.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:16:12.634 00:16:12.634 --- 10.0.0.2 ping statistics --- 00:16:12.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.634 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:12.634 00:16:12.634 --- 10.0.0.1 ping statistics --- 00:16:12.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.634 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:12.634 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1452832 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1452832 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1452832 ']' 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.635 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:12.635 [2024-07-26 18:16:38.678789] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:12.635 [2024-07-26 18:16:38.678877] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.635 [2024-07-26 18:16:38.716800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:12.635 [2024-07-26 18:16:38.742706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.893 [2024-07-26 18:16:38.826025] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.893 [2024-07-26 18:16:38.826097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.893 [2024-07-26 18:16:38.826112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.893 [2024-07-26 18:16:38.826124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.893 [2024-07-26 18:16:38.826133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
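One detail worth pulling out of the dense trace above: the nvmf_tgt starting up here runs inside the cvl_0_0_ns_spdk network namespace, which is what lets 10.0.0.1 (initiator, cvl_0_1) and 10.0.0.2 (target, cvl_0_0) talk over two real e810 ports on one machine. A minimal reconstruction of that topology, using only commands, interface names, and addresses that appear verbatim in the trace (the ordering into a single script is editorial):

# Move the target port into its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address the two ends of the link.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring everything up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic and sanity-check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1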
00:16:12.893 [2024-07-26 18:16:38.826158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.893 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:13.151 [2024-07-26 18:16:39.241837] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.151 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:13.151 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:13.151 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:13.722 Malloc1 00:16:13.722 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:13.722 Malloc2 00:16:13.980 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.238 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:14.497 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.497 [2024-07-26 18:16:40.632517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.757 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:14.757 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 53ae001f-37ae-44d0-bbbe-c1379bd8f10b -a 10.0.0.2 -s 4420 -i 4 00:16:14.757 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:14.757 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:14.757 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.757 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:14.757 
18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:17.313 [ 0]:0x1 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:17.313 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.314 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0bc826d5f72488da905aa0f2137ebb0 00:16:17.314 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0bc826d5f72488da905aa0f2137ebb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.314 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:17.314 [ 0]:0x1 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0bc826d5f72488da905aa0f2137ebb0 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0bc826d5f72488da905aa0f2137ebb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.314 18:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:17.314 [ 1]:0x2 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.314 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.571 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:17.831 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:17.831 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 53ae001f-37ae-44d0-bbbe-c1379bd8f10b -a 10.0.0.2 -s 4420 -i 4 00:16:18.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:18.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:18.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:18.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:19.997 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.254 [ 0]:0x2 00:16:20.254 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.255 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.255 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:20.255 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.255 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:20.512 [ 0]:0x1 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0bc826d5f72488da905aa0f2137ebb0 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0bc826d5f72488da905aa0f2137ebb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.512 [ 1]:0x2 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.512 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.783 18:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.783 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.041 [ 0]:0x2 00:16:21.041 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.041 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.041 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:21.041 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.041 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:21.041 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.041 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 53ae001f-37ae-44d0-bbbe-c1379bd8f10b -a 10.0.0.2 -s 4420 -i 4 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:21.300 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:23.833 [ 0]:0x1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0bc826d5f72488da905aa0f2137ebb0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0bc826d5f72488da905aa0f2137ebb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:23.833 [ 1]:0x2 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:23.833 [ 0]:0x2 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:23.833 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:23.834 18:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:23.834 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:24.093 [2024-07-26 18:16:50.117068] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:24.093 request: 00:16:24.093 { 00:16:24.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.093 "nsid": 2, 00:16:24.093 "host": "nqn.2016-06.io.spdk:host1", 00:16:24.093 "method": "nvmf_ns_remove_host", 00:16:24.093 "req_id": 1 00:16:24.093 } 00:16:24.093 Got JSON-RPC error response 00:16:24.093 response: 00:16:24.093 { 00:16:24.093 "code": -32602, 00:16:24.093 "message": "Invalid parameters" 00:16:24.093 } 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:24.093 18:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:24.093 [ 0]:0x2 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4bc2c509c9024054b4249e789cf0d0d5 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4bc2c509c9024054b4249e789cf0d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:24.093 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1454324 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1454324 /var/tmp/host.sock 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1454324 ']' 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:24.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.351 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:24.351 [2024-07-26 18:16:50.328904] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:24.351 [2024-07-26 18:16:50.328985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454324 ] 00:16:24.351 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.351 [2024-07-26 18:16:50.360987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
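[Editor's note] Before the host-app half of the test spins up, it is worth restating what the masking checks just proved. Stripped of the NOT/es bookkeeping, the flow reduces to the sketch below; rpc.py again abbreviates the full scripts/rpc.py path, and the all-zero NGUID is exactly the "masked" signal the [[ ... != 0...0 ]] comparisons above key on:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # ns 1 now exists but no host can see it until visibility is granted explicitly
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # grant to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # revoke again
    # Host-side probe, as in ns_is_visible():
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros => namespace is masked

Note also the negative test above: nvmf_ns_remove_host against nsid 2, which was created auto-visible, fails with JSON-RPC error -32602 "Invalid parameters", and the NOT wrapper treats that failure as the expected result.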
00:16:24.351 [2024-07-26 18:16:50.393087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.351 [2024-07-26 18:16:50.487972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.609 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.609 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:24.609 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.868 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:25.126 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a3bff86b-5686-457c-98be-c0296c5390a4 00:16:25.126 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:25.126 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A3BFF86B5686457C98BEC0296C5390A4 -i 00:16:25.396 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 795ec5fb-44c4-4eae-9a40-3949b8960258 00:16:25.396 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:25.396 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 795EC5FB44C44EAE9A403949B8960258 -i 00:16:25.731 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:25.988 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:26.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:26.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:26.503 nvme0n1 00:16:26.503 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:26.504 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:27.071 nvme1n2 00:16:27.071 18:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:27.071 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:27.071 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:27.071 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:27.071 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:27.329 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:27.329 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:27.329 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:27.329 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:27.586 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a3bff86b-5686-457c-98be-c0296c5390a4 == \a\3\b\f\f\8\6\b\-\5\6\8\6\-\4\5\7\c\-\9\8\b\e\-\c\0\2\9\6\c\5\3\9\0\a\4 ]] 00:16:27.586 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:27.586 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:27.586 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:27.845 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 795ec5fb-44c4-4eae-9a40-3949b8960258 == \7\9\5\e\c\5\f\b\-\4\4\c\4\-\4\e\a\e\-\9\a\4\0\-\3\9\4\9\b\8\9\6\0\2\5\8 ]] 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1454324 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1454324 ']' 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1454324 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1454324 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1454324' 00:16:27.846 killing process with pid 1454324 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1454324 00:16:27.846 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1454324 00:16:28.105 18:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.673 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:28.673 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:28.673 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.674 rmmod nvme_tcp 00:16:28.674 rmmod nvme_fabrics 00:16:28.674 rmmod nvme_keyring 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1452832 ']' 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1452832 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1452832 ']' 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1452832 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1452832 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1452832' 00:16:28.674 killing process with pid 1452832 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1452832 00:16:28.674 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1452832 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.933 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:30.839 00:16:30.839 real 0m20.529s 00:16:30.839 user 0m26.643s 00:16:30.839 sys 0m4.109s 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:30.839 ************************************ 00:16:30.839 END TEST nvmf_ns_masking 00:16:30.839 ************************************ 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.839 18:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.098 ************************************ 00:16:31.098 START TEST nvmf_nvme_cli 00:16:31.098 ************************************ 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:31.098 * Looking for test storage... 
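[Editor's note] The teardown that closed the masking test above condenses to the following, lifted from the trace; killprocess is the harness wrapper around kill for the recorded target pid:

    sync
    modprobe -v -r nvme-tcp        # the verbose output shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    # killprocess 1452832 -- stop the nvmf target started for this test
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address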
00:16:31.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.098 18:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.098 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.001 18:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.001 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.002 18:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.002 18:16:59 
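With net devices resolved for both functions and is_hw=yes, nvmf_tcp_init assigns roles deterministically, as the next trace lines finish doing: the first port becomes the target side and the second the initiator side. Restated with this run's names and addresses:

TCP_INTERFACE_LIST=(cvl_0_0 cvl_0_1)               # the two netdevs found above
NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}     # cvl_0_0, will own 10.0.0.2
NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}  # cvl_0_1, will own 10.0.0.1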
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.002 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.260 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:16:33.261 00:16:33.261 --- 10.0.0.2 ping statistics --- 00:16:33.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.261 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:16:33.261 00:16:33.261 --- 10.0.0.1 ping statistics --- 00:16:33.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.261 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1456816 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1456816 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1456816 ']' 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.261 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.261 [2024-07-26 18:16:59.278075] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
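The nvmf_tcp_init sequence that completed just above (before the target began starting) isolates the target port in its own network namespace, so initiator-to-target traffic genuinely crosses the link between the two physical ports. Condensed from the trace, same names and addresses; the initial address flushes are omitted:

ip netns add cvl_0_0_ns_spdk                       # namespace owning the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # both directions verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1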
00:16:33.261 [2024-07-26 18:16:59.278150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.261 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.261 [2024-07-26 18:16:59.317346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:33.261 [2024-07-26 18:16:59.349293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.521 [2024-07-26 18:16:59.447662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.521 [2024-07-26 18:16:59.447723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.521 [2024-07-26 18:16:59.447741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.521 [2024-07-26 18:16:59.447754] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.521 [2024-07-26 18:16:59.447766] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.521 [2024-07-26 18:16:59.449089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.521 [2024-07-26 18:16:59.449116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.521 [2024-07-26 18:16:59.449142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.521 [2024-07-26 18:16:59.449145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.521 [2024-07-26 18:16:59.602225] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.521 Malloc0 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
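nvmfappstart launched the target inside the namespace (note NVMF_APP is prefixed with ip netns exec cvl_0_0_ns_spdk) and waitforlisten returned once /var/tmp/spdk.sock came up; the rpc_cmd calls here and in the trace below then provision it. rpc_cmd is effectively a wrapper over scripts/rpc.py, so the equivalent direct invocations are:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420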
00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.521 Malloc1 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.521 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.782 [2024-07-26 18:16:59.687946] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:33.782 00:16:33.782 Discovery Log Number of Records 2, Generation counter 2 00:16:33.782 =====Discovery 
Log Entry 0====== 00:16:33.782 trtype: tcp 00:16:33.782 adrfam: ipv4 00:16:33.782 subtype: current discovery subsystem 00:16:33.782 treq: not required 00:16:33.782 portid: 0 00:16:33.782 trsvcid: 4420 00:16:33.782 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:33.782 traddr: 10.0.0.2 00:16:33.782 eflags: explicit discovery connections, duplicate discovery information 00:16:33.782 sectype: none 00:16:33.782 =====Discovery Log Entry 1====== 00:16:33.782 trtype: tcp 00:16:33.782 adrfam: ipv4 00:16:33.782 subtype: nvme subsystem 00:16:33.782 treq: not required 00:16:33.782 portid: 0 00:16:33.782 trsvcid: 4420 00:16:33.782 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:33.782 traddr: 10.0.0.2 00:16:33.782 eflags: none 00:16:33.782 sectype: none 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:33.782 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.720 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:34.720 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.720 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.720 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:34.720 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:34.720 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.629 18:17:02 
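Discovery returned two records, the discovery subsystem itself plus cnode1, and the connect traced above is verified by waitforserial, which polls lsblk until both serial-matched namespaces appear (nvme_devices=2). The host-side commands condensed from the trace, with this run's generated hostnqn/hostid:

nvme discover -t tcp -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
# waitforserial, reduced to its core loop:
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 2 )); do
  sleep 2
done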
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:36.629 /dev/nvme0n1 ]] 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.629 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.889 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:36.889 18:17:02 
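The namespace inventory above (/dev/nvme0n1 and /dev/nvme0n2, hence nvme_num=2) comes from the get_nvme_devs helper, which scrapes nvme list and keeps only the device-node column. A minimal re-implementation mirroring the traced logic:

# Sketch of get_nvme_devs as traced: keep /dev/nvme* entries from `nvme list`.
get_nvme_devs() {
  local dev _
  while read -r dev _; do
    [[ $dev == /dev/nvme* ]] && echo "$dev"
  done < <(nvme list)
}
devs=($(get_nvme_devs))          # e.g. /dev/nvme0n2 /dev/nvme0n1
echo "${#devs[@]} namespaces"    # 2 in this run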
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.149 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.149 rmmod nvme_tcp 00:16:37.149 rmmod nvme_fabrics 00:16:37.150 rmmod nvme_keyring 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1456816 ']' 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1456816 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1456816 ']' 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1456816 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:37.150 18:17:03 
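Teardown mirrors setup: the host disconnects, the subsystem is deleted over RPC, and nvmftestfini unloads the kernel modules (the rmmod lines above) and kills the target, whose killprocess check continues just below. The essential commands, with the namespace removal assumed to be what the unexpanded _remove_spdk_ns helper performs:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                    # pid 1456816 in this run
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1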
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1456816 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1456816' 00:16:37.150 killing process with pid 1456816 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1456816 00:16:37.150 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1456816 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.408 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.940 00:16:39.940 real 0m8.522s 00:16:39.940 user 0m16.490s 00:16:39.940 sys 0m2.265s 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.940 ************************************ 00:16:39.940 END TEST nvmf_nvme_cli 00:16:39.940 ************************************ 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.940 ************************************ 00:16:39.940 START TEST nvmf_vfio_user 00:16:39.940 ************************************ 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:39.940 * Looking for test storage... 
00:16:39.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.940 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:39.941 18:17:05 
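The nvmf_vfio_user suite configured above (two 64 MiB, 512-byte-block malloc devices) starts one shared target pinned to cores 0 through 3 for all of its sub-tests, as the trace that follows shows. A minimal sketch of that fixture, assuming nvmf_tgt is on PATH and that killprocess and waitforlisten come from autotest_common.sh:

export TEST_TRANSPORT=VFIOUSER
rm -rf /var/run/vfio-user                # clear stale endpoints from earlier runs
nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
echo "Process pid: $nvmfpid"
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$nvmfpid"                 # blocks until /var/tmp/spdk.sock is up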
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1457735 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1457735' 00:16:39.941 Process pid: 1457735 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1457735 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1457735 ']' 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 [2024-07-26 18:17:05.698193] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:39.941 [2024-07-26 18:17:05.698289] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.941 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.941 [2024-07-26 18:17:05.737242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:39.941 [2024-07-26 18:17:05.767632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.941 [2024-07-26 18:17:05.862945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:39.941 [2024-07-26 18:17:05.863007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.941 [2024-07-26 18:17:05.863023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.941 [2024-07-26 18:17:05.863037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.941 [2024-07-26 18:17:05.863048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.941 [2024-07-26 18:17:05.863112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.941 [2024-07-26 18:17:05.865081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.941 [2024-07-26 18:17:05.865104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.941 [2024-07-26 18:17:05.865108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:39.941 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:40.876 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:41.133 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:41.133 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:41.133 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:41.133 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:41.133 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:41.390 Malloc1 00:16:41.390 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:41.647 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:41.905 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:42.163 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:42.163 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:42.163 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:42.421 Malloc2 00:16:42.421 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:42.678 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:42.936 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:43.195 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:43.195 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:43.195 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:43.195 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:43.195 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:43.195 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:43.195 [2024-07-26 18:17:09.283905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:43.196 [2024-07-26 18:17:09.283949] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458150 ] 00:16:43.196 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.196 [2024-07-26 18:17:09.301802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
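For each of the two devices, setup_nvmf_vfio_user created a socket directory, a malloc bdev, a subsystem, and a VFIOUSER listener whose address is that directory; the identify run whose debug output follows then attaches to endpoint 1 as if it were a local NVMe controller. Condensed for the first device, again via scripts/rpc.py:

rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
  -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
spdk_nvme_identify \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -g -L nvme -L nvme_vfio -L vfio_pci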
00:16:43.196 [2024-07-26 18:17:09.319581] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:43.196 [2024-07-26 18:17:09.327567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:43.196 [2024-07-26 18:17:09.327599] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2048b9e000 00:16:43.196 [2024-07-26 18:17:09.328562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.329555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.330562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.331589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.332575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.333577] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.334588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.335591] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.196 [2024-07-26 18:17:09.336600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:43.196 [2024-07-26 18:17:09.336620] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2047960000 00:16:43.196 [2024-07-26 18:17:09.337785] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:43.460 [2024-07-26 18:17:09.353836] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:43.460 [2024-07-26 18:17:09.353874] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:43.460 [2024-07-26 18:17:09.358755] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:43.460 [2024-07-26 18:17:09.358807] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:43.460 [2024-07-26 18:17:09.358897] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:43.460 [2024-07-26 18:17:09.358923] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:43.460 [2024-07-26 18:17:09.358934] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
00:16:43.460 [2024-07-26 18:17:09.359749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:43.460 [2024-07-26 18:17:09.359773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:43.460 [2024-07-26 18:17:09.359786] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:43.460 [2024-07-26 18:17:09.360755] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:43.460 [2024-07-26 18:17:09.360772] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:43.460 [2024-07-26 18:17:09.360785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:43.460 [2024-07-26 18:17:09.361760] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:43.460 [2024-07-26 18:17:09.361778] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:43.460 [2024-07-26 18:17:09.362764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:43.460 [2024-07-26 18:17:09.362783] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:43.460 [2024-07-26 18:17:09.362792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:43.460 [2024-07-26 18:17:09.362803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:43.460 [2024-07-26 18:17:09.362912] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:43.460 [2024-07-26 18:17:09.362920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:43.460 [2024-07-26 18:17:09.362928] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:43.460 [2024-07-26 18:17:09.363774] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:43.460 [2024-07-26 18:17:09.364776] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:43.460 [2024-07-26 18:17:09.365781] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:43.460 [2024-07-26 18:17:09.366782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:43.460 [2024-07-26 18:17:09.366894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:16:43.460 [2024-07-26 18:17:09.367793] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:43.460 [2024-07-26 18:17:09.367811] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:43.460 [2024-07-26 18:17:09.367819] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:43.460 [2024-07-26 18:17:09.367843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:43.460 [2024-07-26 18:17:09.367859] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:43.460 [2024-07-26 18:17:09.367883] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:43.460 [2024-07-26 18:17:09.367893] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:43.460 [2024-07-26 18:17:09.367899] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.460 [2024-07-26 18:17:09.367917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:43.460 [2024-07-26 18:17:09.367980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:43.460 [2024-07-26 18:17:09.367995] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:43.460 [2024-07-26 18:17:09.368003] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:43.460 [2024-07-26 18:17:09.368010] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:43.460 [2024-07-26 18:17:09.368022] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:43.460 [2024-07-26 18:17:09.368030] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:43.460 [2024-07-26 18:17:09.368038] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:43.460 [2024-07-26 18:17:09.368069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.461 [2024-07-26 18:17:09.368164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.461 [2024-07-26 18:17:09.368177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.461 [2024-07-26 18:17:09.368190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.461 [2024-07-26 18:17:09.368198] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368251] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:43.461 [2024-07-26 18:17:09.368260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368277] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368394] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368411] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368439] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:43.461 [2024-07-26 18:17:09.368447] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:43.461 [2024-07-26 18:17:09.368456] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.461 [2024-07-26 18:17:09.368466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368496] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:43.461 [2024-07-26 18:17:09.368511] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368535] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:43.461 [2024-07-26 18:17:09.368543] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:43.461 [2024-07-26 18:17:09.368549] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.461 [2024-07-26 18:17:09.368558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368606] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368631] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:43.461 [2024-07-26 18:17:09.368639] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:43.461 [2024-07-26 18:17:09.368645] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.461 [2024-07-26 18:17:09.368654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368734] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368742] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:43.461 [2024-07-26 18:17:09.368750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:43.461 [2024-07-26 18:17:09.368761] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:43.461 [2024-07-26 18:17:09.368787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:43.461 [2024-07-26 18:17:09.368888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:43.461 [2024-07-26 18:17:09.368909] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:43.462 [2024-07-26 18:17:09.368919] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:43.462 [2024-07-26 18:17:09.368925] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:43.462 [2024-07-26 18:17:09.368931] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:43.462 [2024-07-26 18:17:09.368937] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:43.462 [2024-07-26 18:17:09.368946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:43.462 [2024-07-26 18:17:09.368958] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:43.462 [2024-07-26 18:17:09.368966] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:43.462 [2024-07-26 18:17:09.368972] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.462 [2024-07-26 18:17:09.368981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:43.462 [2024-07-26 18:17:09.368992] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:43.462 [2024-07-26 18:17:09.369000] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:43.462 [2024-07-26 18:17:09.369006] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.462 [2024-07-26 18:17:09.369014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:43.462 [2024-07-26 18:17:09.369026] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:43.462 [2024-07-26 18:17:09.369034] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:43.462 [2024-07-26 18:17:09.369055] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:43.462 [2024-07-26 18:17:09.369073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:43.462 [2024-07-26 18:17:09.369086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:43.462 [2024-07-26 18:17:09.369110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:43.462 [2024-07-26 18:17:09.369129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:43.462 [2024-07-26 18:17:09.369141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:43.462 ===================================================== 00:16:43.462 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:43.462 ===================================================== 00:16:43.462 Controller Capabilities/Features 00:16:43.462 ================================ 00:16:43.462 Vendor ID: 4e58 00:16:43.462 Subsystem Vendor ID: 4e58 00:16:43.462 Serial Number: SPDK1 00:16:43.462 Model Number: SPDK bdev Controller 00:16:43.462 Firmware Version: 24.09 00:16:43.462 Recommended Arb Burst: 6 00:16:43.462 IEEE OUI Identifier: 8d 6b 50 00:16:43.462 Multi-path I/O 00:16:43.462 May have multiple subsystem ports: Yes 00:16:43.462 May have multiple controllers: Yes 00:16:43.462 Associated with SR-IOV VF: No 00:16:43.462 Max Data Transfer Size: 131072 00:16:43.462 Max Number of Namespaces: 32 00:16:43.462 Max Number of I/O Queues: 127 00:16:43.462 NVMe Specification Version (VS): 1.3 00:16:43.462 NVMe Specification Version (Identify): 1.3 00:16:43.462 Maximum Queue Entries: 256 00:16:43.462 Contiguous Queues Required: Yes 00:16:43.462 Arbitration Mechanisms Supported 00:16:43.462 Weighted Round Robin: Not Supported 00:16:43.462 Vendor Specific: Not Supported 00:16:43.462 Reset Timeout: 15000 ms 00:16:43.462 Doorbell Stride: 4 bytes 00:16:43.462 NVM Subsystem Reset: Not Supported 00:16:43.462 Command Sets Supported 00:16:43.462 NVM Command Set: Supported 00:16:43.462 Boot Partition: Not Supported 00:16:43.462 Memory Page Size Minimum: 4096 bytes 00:16:43.462 Memory Page Size Maximum: 4096 bytes 00:16:43.462 Persistent Memory Region: Not Supported 00:16:43.462 Optional Asynchronous Events Supported 00:16:43.462 Namespace Attribute Notices: 
Supported 00:16:43.462 Firmware Activation Notices: Not Supported 00:16:43.462 ANA Change Notices: Not Supported 00:16:43.462 PLE Aggregate Log Change Notices: Not Supported 00:16:43.462 LBA Status Info Alert Notices: Not Supported 00:16:43.462 EGE Aggregate Log Change Notices: Not Supported 00:16:43.462 Normal NVM Subsystem Shutdown event: Not Supported 00:16:43.462 Zone Descriptor Change Notices: Not Supported 00:16:43.462 Discovery Log Change Notices: Not Supported 00:16:43.462 Controller Attributes 00:16:43.462 128-bit Host Identifier: Supported 00:16:43.462 Non-Operational Permissive Mode: Not Supported 00:16:43.462 NVM Sets: Not Supported 00:16:43.462 Read Recovery Levels: Not Supported 00:16:43.462 Endurance Groups: Not Supported 00:16:43.462 Predictable Latency Mode: Not Supported 00:16:43.462 Traffic Based Keep ALive: Not Supported 00:16:43.462 Namespace Granularity: Not Supported 00:16:43.462 SQ Associations: Not Supported 00:16:43.462 UUID List: Not Supported 00:16:43.462 Multi-Domain Subsystem: Not Supported 00:16:43.462 Fixed Capacity Management: Not Supported 00:16:43.462 Variable Capacity Management: Not Supported 00:16:43.462 Delete Endurance Group: Not Supported 00:16:43.462 Delete NVM Set: Not Supported 00:16:43.462 Extended LBA Formats Supported: Not Supported 00:16:43.462 Flexible Data Placement Supported: Not Supported 00:16:43.462 00:16:43.462 Controller Memory Buffer Support 00:16:43.462 ================================ 00:16:43.462 Supported: No 00:16:43.462 00:16:43.462 Persistent Memory Region Support 00:16:43.462 ================================ 00:16:43.462 Supported: No 00:16:43.462 00:16:43.462 Admin Command Set Attributes 00:16:43.462 ============================ 00:16:43.462 Security Send/Receive: Not Supported 00:16:43.462 Format NVM: Not Supported 00:16:43.462 Firmware Activate/Download: Not Supported 00:16:43.462 Namespace Management: Not Supported 00:16:43.462 Device Self-Test: Not Supported 00:16:43.463 Directives: Not Supported 00:16:43.463 NVMe-MI: Not Supported 00:16:43.463 Virtualization Management: Not Supported 00:16:43.463 Doorbell Buffer Config: Not Supported 00:16:43.463 Get LBA Status Capability: Not Supported 00:16:43.463 Command & Feature Lockdown Capability: Not Supported 00:16:43.463 Abort Command Limit: 4 00:16:43.463 Async Event Request Limit: 4 00:16:43.463 Number of Firmware Slots: N/A 00:16:43.463 Firmware Slot 1 Read-Only: N/A 00:16:43.463 Firmware Activation Without Reset: N/A 00:16:43.463 Multiple Update Detection Support: N/A 00:16:43.463 Firmware Update Granularity: No Information Provided 00:16:43.463 Per-Namespace SMART Log: No 00:16:43.463 Asymmetric Namespace Access Log Page: Not Supported 00:16:43.463 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:43.463 Command Effects Log Page: Supported 00:16:43.463 Get Log Page Extended Data: Supported 00:16:43.463 Telemetry Log Pages: Not Supported 00:16:43.463 Persistent Event Log Pages: Not Supported 00:16:43.463 Supported Log Pages Log Page: May Support 00:16:43.463 Commands Supported & Effects Log Page: Not Supported 00:16:43.463 Feature Identifiers & Effects Log Page:May Support 00:16:43.463 NVMe-MI Commands & Effects Log Page: May Support 00:16:43.463 Data Area 4 for Telemetry Log: Not Supported 00:16:43.463 Error Log Page Entries Supported: 128 00:16:43.463 Keep Alive: Supported 00:16:43.463 Keep Alive Granularity: 10000 ms 00:16:43.463 00:16:43.463 NVM Command Set Attributes 00:16:43.463 ========================== 00:16:43.463 Submission Queue Entry Size 00:16:43.463 Max: 64 
00:16:43.463 Min: 64 00:16:43.463 Completion Queue Entry Size 00:16:43.463 Max: 16 00:16:43.463 Min: 16 00:16:43.463 Number of Namespaces: 32 00:16:43.463 Compare Command: Supported 00:16:43.463 Write Uncorrectable Command: Not Supported 00:16:43.463 Dataset Management Command: Supported 00:16:43.463 Write Zeroes Command: Supported 00:16:43.463 Set Features Save Field: Not Supported 00:16:43.463 Reservations: Not Supported 00:16:43.463 Timestamp: Not Supported 00:16:43.463 Copy: Supported 00:16:43.463 Volatile Write Cache: Present 00:16:43.463 Atomic Write Unit (Normal): 1 00:16:43.463 Atomic Write Unit (PFail): 1 00:16:43.463 Atomic Compare & Write Unit: 1 00:16:43.463 Fused Compare & Write: Supported 00:16:43.463 Scatter-Gather List 00:16:43.463 SGL Command Set: Supported (Dword aligned) 00:16:43.463 SGL Keyed: Not Supported 00:16:43.463 SGL Bit Bucket Descriptor: Not Supported 00:16:43.463 SGL Metadata Pointer: Not Supported 00:16:43.463 Oversized SGL: Not Supported 00:16:43.463 SGL Metadata Address: Not Supported 00:16:43.463 SGL Offset: Not Supported 00:16:43.463 Transport SGL Data Block: Not Supported 00:16:43.463 Replay Protected Memory Block: Not Supported 00:16:43.463 00:16:43.463 Firmware Slot Information 00:16:43.463 ========================= 00:16:43.463 Active slot: 1 00:16:43.463 Slot 1 Firmware Revision: 24.09 00:16:43.463 00:16:43.463 00:16:43.463 Commands Supported and Effects 00:16:43.463 ============================== 00:16:43.463 Admin Commands 00:16:43.463 -------------- 00:16:43.463 Get Log Page (02h): Supported 00:16:43.463 Identify (06h): Supported 00:16:43.463 Abort (08h): Supported 00:16:43.463 Set Features (09h): Supported 00:16:43.463 Get Features (0Ah): Supported 00:16:43.463 Asynchronous Event Request (0Ch): Supported 00:16:43.463 Keep Alive (18h): Supported 00:16:43.463 I/O Commands 00:16:43.463 ------------ 00:16:43.463 Flush (00h): Supported LBA-Change 00:16:43.463 Write (01h): Supported LBA-Change 00:16:43.463 Read (02h): Supported 00:16:43.463 Compare (05h): Supported 00:16:43.463 Write Zeroes (08h): Supported LBA-Change 00:16:43.463 Dataset Management (09h): Supported LBA-Change 00:16:43.463 Copy (19h): Supported LBA-Change 00:16:43.463 00:16:43.463 Error Log 00:16:43.463 ========= 00:16:43.463 00:16:43.463 Arbitration 00:16:43.463 =========== 00:16:43.463 Arbitration Burst: 1 00:16:43.463 00:16:43.463 Power Management 00:16:43.463 ================ 00:16:43.463 Number of Power States: 1 00:16:43.463 Current Power State: Power State #0 00:16:43.463 Power State #0: 00:16:43.463 Max Power: 0.00 W 00:16:43.463 Non-Operational State: Operational 00:16:43.463 Entry Latency: Not Reported 00:16:43.463 Exit Latency: Not Reported 00:16:43.463 Relative Read Throughput: 0 00:16:43.463 Relative Read Latency: 0 00:16:43.463 Relative Write Throughput: 0 00:16:43.463 Relative Write Latency: 0 00:16:43.463 Idle Power: Not Reported 00:16:43.463 Active Power: Not Reported 00:16:43.463 Non-Operational Permissive Mode: Not Supported 00:16:43.463 00:16:43.463 Health Information 00:16:43.463 ================== 00:16:43.463 Critical Warnings: 00:16:43.463 Available Spare Space: OK 00:16:43.463 Temperature: OK 00:16:43.463 Device Reliability: OK 00:16:43.463 Read Only: No 00:16:43.463 Volatile Memory Backup: OK 00:16:43.463 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:43.463 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:43.463 Available Spare: 0% 00:16:43.463 [2024-07-26 18:17:09.369264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:43.464 [2024-07-26 18:17:09.369281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:43.464 [2024-07-26 18:17:09.369323] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:43.464 [2024-07-26 18:17:09.369356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.464 [2024-07-26 18:17:09.369368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.464 [2024-07-26 18:17:09.369378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.464 [2024-07-26 18:17:09.369388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.464 [2024-07-26 18:17:09.373070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:43.464 [2024-07-26 18:17:09.373092] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:43.464 [2024-07-26 18:17:09.373820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:43.464 [2024-07-26 18:17:09.373906] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:43.464 [2024-07-26 18:17:09.373920] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:43.464 [2024-07-26 18:17:09.374832] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:43.464 [2024-07-26 18:17:09.374855] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:43.464 [2024-07-26 18:17:09.374907] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:43.464 [2024-07-26 18:17:09.376872] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:43.464 Available Spare Threshold: 0% 00:16:43.464 Life Percentage Used: 0% 00:16:43.464 Data Units Read: 0 00:16:43.464 Data Units Written: 0 00:16:43.464 Host Read Commands: 0 00:16:43.464 Host Write Commands: 0 00:16:43.464 Controller Busy Time: 0 minutes 00:16:43.464 Power Cycles: 0 00:16:43.464 Power On Hours: 0 hours 00:16:43.464 Unsafe Shutdowns: 0 00:16:43.464 Unrecoverable Media Errors: 0 00:16:43.464 Lifetime Error Log Entries: 0 00:16:43.464 Warning Temperature Time: 0 minutes 00:16:43.464 Critical Temperature Time: 0 minutes 00:16:43.464 00:16:43.464 Number of Queues 00:16:43.464 ================ 00:16:43.464 Number of I/O Submission Queues: 127 00:16:43.464 Number of I/O Completion Queues: 127 00:16:43.464 00:16:43.464 Active Namespaces 00:16:43.464 ================= 00:16:43.464 Namespace ID:1 00:16:43.464 Error Recovery Timeout: Unlimited 00:16:43.464 Command Set Identifier: NVM (00h) 00:16:43.464 Deallocate: Supported 00:16:43.464 Deallocated/Unwritten Error: Not
Supported 00:16:43.464 Deallocated Read Value: Unknown 00:16:43.464 Deallocate in Write Zeroes: Not Supported 00:16:43.464 Deallocated Guard Field: 0xFFFF 00:16:43.464 Flush: Supported 00:16:43.464 Reservation: Supported 00:16:43.464 Namespace Sharing Capabilities: Multiple Controllers 00:16:43.464 Size (in LBAs): 131072 (0GiB) 00:16:43.464 Capacity (in LBAs): 131072 (0GiB) 00:16:43.464 Utilization (in LBAs): 131072 (0GiB) 00:16:43.464 NGUID: 8DEDA8888A0045BA9C08C0028C7BD123 00:16:43.464 UUID: 8deda888-8a00-45ba-9c08-c0028c7bd123 00:16:43.464 Thin Provisioning: Not Supported 00:16:43.464 Per-NS Atomic Units: Yes 00:16:43.464 Atomic Boundary Size (Normal): 0 00:16:43.464 Atomic Boundary Size (PFail): 0 00:16:43.464 Atomic Boundary Offset: 0 00:16:43.464 Maximum Single Source Range Length: 65535 00:16:43.464 Maximum Copy Length: 65535 00:16:43.464 Maximum Source Range Count: 1 00:16:43.464 NGUID/EUI64 Never Reused: No 00:16:43.464 Namespace Write Protected: No 00:16:43.464 Number of LBA Formats: 1 00:16:43.464 Current LBA Format: LBA Format #00 00:16:43.464 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:43.464 00:16:43.464 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:43.464 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.754 [2024-07-26 18:17:09.611917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:49.025 Initializing NVMe Controllers 00:16:49.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:49.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:49.025 Initialization complete. Launching workers. 00:16:49.025 ======================================================== 00:16:49.025 Latency(us) 00:16:49.025 Device Information : IOPS MiB/s Average min max 00:16:49.025 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33746.18 131.82 3792.81 1159.85 7547.96 00:16:49.025 ======================================================== 00:16:49.025 Total : 33746.18 131.82 3792.81 1159.85 7547.96 00:16:49.025 00:16:49.025 [2024-07-26 18:17:14.634509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:49.025 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:49.025 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.025 [2024-07-26 18:17:14.863624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:54.315 Initializing NVMe Controllers 00:16:54.315 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:54.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:54.315 Initialization complete. Launching workers. 
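For context on what spdk_nvme_perf is exercising here: the tool reaches the vfio-user endpoint entirely through SPDK's userspace NVMe driver, with no kernel block layer in the path. The write-run summary table follows below; first, a minimal sketch of the same connect-and-read flow against this endpoint using SPDK's public API. This is not the perf tool's actual source; error handling is trimmed, and the app name, LBA, and buffer geometry are illustrative.

    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static volatile bool g_done;

    static void
    read_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            /* Runs from spdk_nvme_qpair_process_completions() below. */
            g_done = true;
    }

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {};

            spdk_env_opts_init(&env_opts);
            env_opts.name = "vfio_read_sketch";   /* hypothetical app name */
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            /* Same connection string the perf runs pass via -r. */
            spdk_nvme_transport_id_parse(&trid,
                "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
                "subnqn:nqn.2019-07.io.spdk:cnode1");

            struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
            struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);

            /* Payloads must be DMA-safe memory, hence spdk_zmalloc() rather than malloc(). */
            void *buf = spdk_zmalloc(4096, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

            spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* LBA count */, read_cb, NULL, 0);
            while (!g_done) {
                    spdk_nvme_qpair_process_completions(qpair, 0);
            }

            spdk_free(buf);
            spdk_nvme_ctrlr_free_io_qpair(qpair);
            spdk_nvme_detach(ctrlr);
            return 0;
    }

The polling loop is the design point worth noting: completions are reaped by the caller's own thread, which is why the perf numbers above come with no interrupt or context-switch overhead.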
00:16:54.315 ======================================================== 00:16:54.315 Latency(us) 00:16:54.315 Device Information : IOPS MiB/s Average min max 00:16:54.315 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7993.87 4981.63 15974.15 00:16:54.315 ======================================================== 00:16:54.315 Total : 16025.60 62.60 7993.87 4981.63 15974.15 00:16:54.315 00:16:54.315 [2024-07-26 18:17:19.898758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:54.315 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:54.315 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.315 [2024-07-26 18:17:20.115934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:59.591 [2024-07-26 18:17:25.189459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:59.591 Initializing NVMe Controllers 00:16:59.591 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:59.591 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:59.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:59.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:59.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:59.591 Initialization complete. Launching workers. 00:16:59.591 Starting thread on core 2 00:16:59.591 Starting thread on core 3 00:16:59.591 Starting thread on core 1 00:16:59.591 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:59.591 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.591 [2024-07-26 18:17:25.492333] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:03.783 [2024-07-26 18:17:29.173312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:03.783 Initializing NVMe Controllers 00:17:03.783 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.783 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.783 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:03.783 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:03.783 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:03.783 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:03.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:03.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:03.783 Initialization complete. Launching workers. 
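The arbitration run launching here exercises NVMe queue arbitration by giving each worker core an "urgent priority" I/O queue pair, as the per-core thread messages below show. In API terms that is a single queue-pair option set before allocation; a hedged fragment, assuming an already-connected ctrlr and weighted round robin arbitration negotiated at attach time (what the tool's -a option selects):

    #include "spdk/nvme.h"

    /* 'ctrlr' is an already-connected controller. For qprio to take effect,
     * weighted round robin arbitration must have been enabled when the
     * controller was attached; otherwise the field is ignored. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_io_qpair_opts opts;

            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
            opts.qprio = SPDK_NVME_QPRIO_URGENT;  /* also _HIGH, _MEDIUM, _LOW */
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }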
00:17:03.783 Starting thread on core 1 with urgent priority queue 00:17:03.783 Starting thread on core 2 with urgent priority queue 00:17:03.783 Starting thread on core 3 with urgent priority queue 00:17:03.783 Starting thread on core 0 with urgent priority queue 00:17:03.784 SPDK bdev Controller (SPDK1 ) core 0: 2296.33 IO/s 43.55 secs/100000 ios 00:17:03.784 SPDK bdev Controller (SPDK1 ) core 1: 2579.67 IO/s 38.76 secs/100000 ios 00:17:03.784 SPDK bdev Controller (SPDK1 ) core 2: 2713.33 IO/s 36.86 secs/100000 ios 00:17:03.784 SPDK bdev Controller (SPDK1 ) core 3: 2608.67 IO/s 38.33 secs/100000 ios 00:17:03.784 ======================================================== 00:17:03.784 00:17:03.784 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:03.784 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.784 [2024-07-26 18:17:29.456653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:03.784 Initializing NVMe Controllers 00:17:03.784 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.784 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.784 Namespace ID: 1 size: 0GB 00:17:03.784 Initialization complete. 00:17:03.784 INFO: using host memory buffer for IO 00:17:03.784 Hello world! 00:17:03.784 [2024-07-26 18:17:29.490287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:03.784 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:03.784 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.784 [2024-07-26 18:17:29.787587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:04.719 Initializing NVMe Controllers 00:17:04.719 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:04.719 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:04.719 Initialization complete. Launching workers. 
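The submit/complete tables printed after the overhead run below are latency histograms: each row is one bucket converted to microseconds, the percentage is cumulative across buckets, and the parenthesized figure is the per-bucket command count. A plausible reduced version of that bookkeeping using SPDK's histogram helper is sketched here; it is not the overhead tool's exact code, and the print format is illustrative.

    #include <stdio.h>
    #include <stdint.h>
    #include "spdk/env.h"
    #include "spdk/histogram_data.h"

    static void
    print_bucket(void *ctx, uint64_t start, uint64_t end, uint64_t count,
                 uint64_t total, uint64_t so_far)
    {
            double us_per_tick = 1000000.0 / spdk_get_ticks_hz();

            if (count == 0) {
                    return;
            }
            /* "Range in us" / cumulative percentage / count, as in the tables below. */
            printf("%10.3f - %10.3f: %8.4f%% (%6ju)\n",
                   start * us_per_tick, end * us_per_tick,
                   (double)so_far * 100.0 / total, (uintmax_t)count);
    }

    /* On each I/O completion: tally the elapsed TSC ticks for that command. */
    void
    record_latency(struct spdk_histogram_data *h, uint64_t submit_tsc)
    {
            spdk_histogram_data_tally(h, spdk_get_ticks() - submit_tsc);
    }

    /* At end of run, with h from an earlier spdk_histogram_data_alloc(): */
    void
    dump_histogram(struct spdk_histogram_data *h)
    {
            spdk_histogram_data_iterate(h, print_bucket, NULL);
            spdk_histogram_data_free(h);
    }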
00:17:04.719 submit (in ns) avg, min, max = 7259.0, 3552.2, 4023662.2 00:17:04.719 complete (in ns) avg, min, max = 25006.5, 2072.2, 6993365.6 00:17:04.719 00:17:04.719 Submit histogram 00:17:04.719 ================ 00:17:04.719 Range in us Cumulative Count 00:17:04.719 3.532 - 3.556: 0.0076% ( 1) 00:17:04.719 3.556 - 3.579: 1.0366% ( 136) 00:17:04.719 3.579 - 3.603: 2.9510% ( 253) 00:17:04.719 3.603 - 3.627: 7.9979% ( 667) 00:17:04.719 3.627 - 3.650: 16.9265% ( 1180) 00:17:04.719 3.650 - 3.674: 26.9143% ( 1320) 00:17:04.719 3.674 - 3.698: 35.0106% ( 1070) 00:17:04.719 3.698 - 3.721: 41.2076% ( 819) 00:17:04.719 3.721 - 3.745: 46.0048% ( 634) 00:17:04.719 3.745 - 3.769: 50.5448% ( 600) 00:17:04.719 3.769 - 3.793: 54.4719% ( 519) 00:17:04.719 3.793 - 3.816: 57.8087% ( 441) 00:17:04.719 3.816 - 3.840: 60.7521% ( 389) 00:17:04.719 3.840 - 3.864: 63.8923% ( 415) 00:17:04.719 3.864 - 3.887: 67.6907% ( 502) 00:17:04.719 3.887 - 3.911: 71.9431% ( 562) 00:17:04.719 3.911 - 3.935: 75.9912% ( 535) 00:17:04.719 3.935 - 3.959: 78.6093% ( 346) 00:17:04.719 3.959 - 3.982: 80.7506% ( 283) 00:17:04.719 3.982 - 4.006: 82.4607% ( 226) 00:17:04.719 4.006 - 4.030: 83.8605% ( 185) 00:17:04.719 4.030 - 4.053: 84.9879% ( 149) 00:17:04.719 4.053 - 4.077: 86.2591% ( 168) 00:17:04.719 4.077 - 4.101: 87.0611% ( 106) 00:17:04.719 4.101 - 4.124: 87.7951% ( 97) 00:17:04.719 4.124 - 4.148: 88.4383% ( 85) 00:17:04.719 4.148 - 4.172: 88.9831% ( 72) 00:17:04.719 4.172 - 4.196: 89.4673% ( 64) 00:17:04.719 4.196 - 4.219: 89.8154% ( 46) 00:17:04.719 4.219 - 4.243: 90.0802% ( 35) 00:17:04.719 4.243 - 4.267: 90.2996% ( 29) 00:17:04.719 4.267 - 4.290: 90.5645% ( 35) 00:17:04.719 4.290 - 4.314: 90.7763% ( 28) 00:17:04.719 4.314 - 4.338: 91.0185% ( 32) 00:17:04.719 4.338 - 4.361: 91.2757% ( 34) 00:17:04.720 4.361 - 4.385: 91.5103% ( 31) 00:17:04.720 4.385 - 4.409: 91.8054% ( 39) 00:17:04.720 4.409 - 4.433: 91.9492% ( 19) 00:17:04.720 4.433 - 4.456: 92.2140% ( 35) 00:17:04.720 4.456 - 4.480: 92.4334% ( 29) 00:17:04.720 4.480 - 4.504: 92.6453% ( 28) 00:17:04.720 4.504 - 4.527: 92.8193% ( 23) 00:17:04.720 4.527 - 4.551: 93.0085% ( 25) 00:17:04.720 4.551 - 4.575: 93.2052% ( 26) 00:17:04.720 4.575 - 4.599: 93.3792% ( 23) 00:17:04.720 4.599 - 4.622: 93.6138% ( 31) 00:17:04.720 4.622 - 4.646: 93.7803% ( 22) 00:17:04.720 4.646 - 4.670: 94.0148% ( 31) 00:17:04.720 4.670 - 4.693: 94.2343% ( 29) 00:17:04.720 4.693 - 4.717: 94.4386% ( 27) 00:17:04.720 4.717 - 4.741: 94.5899% ( 20) 00:17:04.720 4.741 - 4.764: 94.8245% ( 31) 00:17:04.720 4.764 - 4.788: 95.0212% ( 26) 00:17:04.720 4.788 - 4.812: 95.2179% ( 26) 00:17:04.720 4.812 - 4.836: 95.3995% ( 24) 00:17:04.720 4.836 - 4.859: 95.5357% ( 18) 00:17:04.720 4.859 - 4.883: 95.7854% ( 33) 00:17:04.720 4.883 - 4.907: 95.9292% ( 19) 00:17:04.720 4.907 - 4.930: 96.0956% ( 22) 00:17:04.720 4.930 - 4.954: 96.2394% ( 19) 00:17:04.720 4.954 - 4.978: 96.2999% ( 8) 00:17:04.720 4.978 - 5.001: 96.4437% ( 19) 00:17:04.720 5.001 - 5.025: 96.6026% ( 21) 00:17:04.720 5.025 - 5.049: 96.7388% ( 18) 00:17:04.720 5.049 - 5.073: 96.8220% ( 11) 00:17:04.720 5.073 - 5.096: 96.9431% ( 16) 00:17:04.720 5.096 - 5.120: 97.0490% ( 14) 00:17:04.720 5.120 - 5.144: 97.1474% ( 13) 00:17:04.720 5.144 - 5.167: 97.2533% ( 14) 00:17:04.720 5.167 - 5.191: 97.3366% ( 11) 00:17:04.720 5.191 - 5.215: 97.3971% ( 8) 00:17:04.720 5.215 - 5.239: 97.4501% ( 7) 00:17:04.720 5.239 - 5.262: 97.5030% ( 7) 00:17:04.720 5.262 - 5.286: 97.5409% ( 5) 00:17:04.720 5.286 - 5.310: 97.5636% ( 3) 00:17:04.720 5.310 - 5.333: 97.5938% ( 4) 00:17:04.720 
5.333 - 5.357: 97.6317% ( 5) 00:17:04.720 5.357 - 5.381: 97.6846% ( 7) 00:17:04.720 5.381 - 5.404: 97.7225% ( 5) 00:17:04.720 5.404 - 5.428: 97.7300% ( 1) 00:17:04.720 5.428 - 5.452: 97.7376% ( 1) 00:17:04.720 5.452 - 5.476: 97.7527% ( 2) 00:17:04.720 5.476 - 5.499: 97.7830% ( 4) 00:17:04.720 5.523 - 5.547: 97.7906% ( 1) 00:17:04.720 5.570 - 5.594: 97.7981% ( 1) 00:17:04.720 5.594 - 5.618: 97.8057% ( 1) 00:17:04.720 5.618 - 5.641: 97.8208% ( 2) 00:17:04.720 5.665 - 5.689: 97.8284% ( 1) 00:17:04.720 5.689 - 5.713: 97.8360% ( 1) 00:17:04.720 5.713 - 5.736: 97.8435% ( 1) 00:17:04.720 5.760 - 5.784: 97.8814% ( 5) 00:17:04.720 5.784 - 5.807: 97.8889% ( 1) 00:17:04.720 5.807 - 5.831: 97.8965% ( 1) 00:17:04.720 5.831 - 5.855: 97.9268% ( 4) 00:17:04.720 5.855 - 5.879: 97.9495% ( 3) 00:17:04.720 5.879 - 5.902: 97.9722% ( 3) 00:17:04.720 5.902 - 5.926: 97.9797% ( 1) 00:17:04.720 5.950 - 5.973: 97.9873% ( 1) 00:17:04.720 5.997 - 6.021: 97.9949% ( 1) 00:17:04.720 6.021 - 6.044: 98.0024% ( 1) 00:17:04.720 6.044 - 6.068: 98.0100% ( 1) 00:17:04.720 6.116 - 6.163: 98.0327% ( 3) 00:17:04.720 6.210 - 6.258: 98.0554% ( 3) 00:17:04.720 6.258 - 6.305: 98.0630% ( 1) 00:17:04.720 6.305 - 6.353: 98.0857% ( 3) 00:17:04.720 6.353 - 6.400: 98.1008% ( 2) 00:17:04.720 6.400 - 6.447: 98.1159% ( 2) 00:17:04.720 6.447 - 6.495: 98.1311% ( 2) 00:17:04.720 6.542 - 6.590: 98.1538% ( 3) 00:17:04.720 6.590 - 6.637: 98.1689% ( 2) 00:17:04.720 6.637 - 6.684: 98.1765% ( 1) 00:17:04.720 6.684 - 6.732: 98.1840% ( 1) 00:17:04.720 6.732 - 6.779: 98.1916% ( 1) 00:17:04.720 6.779 - 6.827: 98.1992% ( 1) 00:17:04.720 6.921 - 6.969: 98.2067% ( 1) 00:17:04.720 6.969 - 7.016: 98.2143% ( 1) 00:17:04.720 7.016 - 7.064: 98.2219% ( 1) 00:17:04.720 7.111 - 7.159: 98.2294% ( 1) 00:17:04.720 7.159 - 7.206: 98.2370% ( 1) 00:17:04.720 7.206 - 7.253: 98.2446% ( 1) 00:17:04.720 7.301 - 7.348: 98.2521% ( 1) 00:17:04.720 7.348 - 7.396: 98.2673% ( 2) 00:17:04.720 7.538 - 7.585: 98.2748% ( 1) 00:17:04.720 7.585 - 7.633: 98.2900% ( 2) 00:17:04.720 7.633 - 7.680: 98.2975% ( 1) 00:17:04.720 7.822 - 7.870: 98.3051% ( 1) 00:17:04.720 7.964 - 8.012: 98.3127% ( 1) 00:17:04.720 8.012 - 8.059: 98.3354% ( 3) 00:17:04.720 8.201 - 8.249: 98.3429% ( 1) 00:17:04.720 8.344 - 8.391: 98.3505% ( 1) 00:17:04.720 8.533 - 8.581: 98.3656% ( 2) 00:17:04.720 8.676 - 8.723: 98.3808% ( 2) 00:17:04.720 8.723 - 8.770: 98.3883% ( 1) 00:17:04.720 8.818 - 8.865: 98.3959% ( 1) 00:17:04.720 9.055 - 9.102: 98.4035% ( 1) 00:17:04.720 9.102 - 9.150: 98.4110% ( 1) 00:17:04.720 9.150 - 9.197: 98.4186% ( 1) 00:17:04.720 9.339 - 9.387: 98.4262% ( 1) 00:17:04.720 9.481 - 9.529: 98.4337% ( 1) 00:17:04.720 9.719 - 9.766: 98.4564% ( 3) 00:17:04.720 9.861 - 9.908: 98.4640% ( 1) 00:17:04.720 10.240 - 10.287: 98.4715% ( 1) 00:17:04.720 10.430 - 10.477: 98.4791% ( 1) 00:17:04.720 10.761 - 10.809: 98.5018% ( 3) 00:17:04.720 11.141 - 11.188: 98.5094% ( 1) 00:17:04.720 11.662 - 11.710: 98.5321% ( 3) 00:17:04.720 11.757 - 11.804: 98.5396% ( 1) 00:17:04.720 11.899 - 11.947: 98.5472% ( 1) 00:17:04.720 11.947 - 11.994: 98.5548% ( 1) 00:17:04.720 12.089 - 12.136: 98.5623% ( 1) 00:17:04.720 12.136 - 12.231: 98.5775% ( 2) 00:17:04.720 12.231 - 12.326: 98.6077% ( 4) 00:17:04.720 12.326 - 12.421: 98.6153% ( 1) 00:17:04.720 12.516 - 12.610: 98.6229% ( 1) 00:17:04.720 12.610 - 12.705: 98.6304% ( 1) 00:17:04.720 12.800 - 12.895: 98.6456% ( 2) 00:17:04.720 12.990 - 13.084: 98.6531% ( 1) 00:17:04.720 13.084 - 13.179: 98.6607% ( 1) 00:17:04.720 13.179 - 13.274: 98.6834% ( 3) 00:17:04.720 13.274 - 13.369: 98.6910% ( 1) 
00:17:04.720 13.464 - 13.559: 98.6985% ( 1) 00:17:04.720 13.559 - 13.653: 98.7061% ( 1) 00:17:04.720 13.653 - 13.748: 98.7137% ( 1) 00:17:04.720 13.748 - 13.843: 98.7212% ( 1) 00:17:04.720 13.938 - 14.033: 98.7288% ( 1) 00:17:04.720 14.127 - 14.222: 98.7364% ( 1) 00:17:04.720 14.412 - 14.507: 98.7515% ( 2) 00:17:04.720 15.076 - 15.170: 98.7591% ( 1) 00:17:04.720 15.265 - 15.360: 98.7666% ( 1) 00:17:04.720 15.550 - 15.644: 98.7742% ( 1) 00:17:04.720 16.498 - 16.593: 98.7818% ( 1) 00:17:04.720 17.067 - 17.161: 98.7893% ( 1) 00:17:04.720 17.161 - 17.256: 98.7969% ( 1) 00:17:04.720 17.256 - 17.351: 98.8120% ( 2) 00:17:04.720 17.351 - 17.446: 98.8423% ( 4) 00:17:04.720 17.446 - 17.541: 98.8726% ( 4) 00:17:04.720 17.541 - 17.636: 98.9180% ( 6) 00:17:04.720 17.636 - 17.730: 98.9407% ( 3) 00:17:04.720 17.730 - 17.825: 99.0012% ( 8) 00:17:04.720 17.825 - 17.920: 99.0239% ( 3) 00:17:04.720 17.920 - 18.015: 99.0542% ( 4) 00:17:04.720 18.015 - 18.110: 99.1298% ( 10) 00:17:04.720 18.110 - 18.204: 99.2206% ( 12) 00:17:04.720 18.204 - 18.299: 99.2509% ( 4) 00:17:04.720 18.299 - 18.394: 99.3493% ( 13) 00:17:04.720 18.394 - 18.489: 99.4022% ( 7) 00:17:04.720 18.489 - 18.584: 99.4552% ( 7) 00:17:04.720 18.584 - 18.679: 99.5006% ( 6) 00:17:04.721 18.679 - 18.773: 99.5536% ( 7) 00:17:04.721 18.773 - 18.868: 99.5914% ( 5) 00:17:04.721 18.868 - 18.963: 99.6217% ( 4) 00:17:04.721 18.963 - 19.058: 99.6671% ( 6) 00:17:04.721 19.058 - 19.153: 99.6973% ( 4) 00:17:04.721 19.153 - 19.247: 99.7352% ( 5) 00:17:04.721 19.342 - 19.437: 99.7654% ( 4) 00:17:04.721 19.437 - 19.532: 99.7806% ( 2) 00:17:04.721 19.532 - 19.627: 99.7881% ( 1) 00:17:04.721 19.721 - 19.816: 99.7957% ( 1) 00:17:04.721 19.816 - 19.911: 99.8033% ( 1) 00:17:04.721 19.911 - 20.006: 99.8184% ( 2) 00:17:04.721 20.290 - 20.385: 99.8260% ( 1) 00:17:04.721 20.480 - 20.575: 99.8335% ( 1) 00:17:04.721 20.575 - 20.670: 99.8411% ( 1) 00:17:04.721 22.092 - 22.187: 99.8562% ( 2) 00:17:04.721 22.756 - 22.850: 99.8638% ( 1) 00:17:04.721 23.135 - 23.230: 99.8714% ( 1) 00:17:04.721 24.273 - 24.462: 99.8789% ( 1) 00:17:04.721 24.462 - 24.652: 99.8865% ( 1) 00:17:04.721 26.359 - 26.548: 99.8941% ( 1) 00:17:04.721 26.927 - 27.117: 99.9016% ( 1) 00:17:04.721 27.686 - 27.876: 99.9092% ( 1) 00:17:04.721 29.203 - 29.393: 99.9168% ( 1) 00:17:04.721 2099.579 - 2111.716: 99.9243% ( 1) 00:17:04.721 3980.705 - 4004.978: 99.9773% ( 7) 00:17:04.721 4004.978 - 4029.250: 100.0000% ( 3) 00:17:04.721 00:17:04.721 Complete histogram 00:17:04.721 ================== 00:17:04.721 Range in us Cumulative Count 00:17:04.721 2.062 - 2.074: 0.0605% ( 8) 00:17:04.721 2.074 - 2.086: 15.4661% ( 2036) 00:17:04.721 2.086 - 2.098: 41.4952% ( 3440) 00:17:04.721 2.098 - 2.110: 44.7261% ( 427) 00:17:04.721 2.110 - 2.121: 51.7327% ( 926) 00:17:04.721 2.121 - 2.133: 56.2576% ( 598) 00:17:04.721 2.133 - 2.145: 57.9449% ( 223) 00:17:04.721 2.145 - 2.157: 64.5354% ( 871) 00:17:04.721 2.157 - 2.169: 69.1737% ( 613) 00:17:04.721 2.169 - 2.181: 70.0742% ( 119) 00:17:04.721 2.181 - 2.193: 73.0327% ( 391) 00:17:04.721 2.193 - 2.204: 74.8335% ( 238) 00:17:04.721 2.204 - 2.216: 75.3329% ( 66) 00:17:04.721 2.216 - 2.228: 78.8514% ( 465) 00:17:04.721 2.228 - 2.240: 82.6574% ( 503) 00:17:04.721 2.240 - 2.252: 84.0345% ( 182) 00:17:04.721 2.252 - 2.264: 85.4797% ( 191) 00:17:04.721 2.264 - 2.276: 86.4331% ( 126) 00:17:04.721 2.276 - 2.287: 86.7736% ( 45) 00:17:04.721 2.287 - 2.299: 87.2427% ( 62) 00:17:04.721 2.299 - 2.311: 88.0221% ( 103) 00:17:04.721 2.311 - 2.323: 88.4837% ( 61) 00:17:04.721 2.323 - 2.335: 88.5366% 
( 7) 00:17:04.721 2.335 - 2.347: 88.5745% ( 5) 00:17:04.721 2.347 - 2.359: 88.7258% ( 20) 00:17:04.721 2.359 - 2.370: 89.0663% ( 45) 00:17:04.721 2.370 - 2.382: 89.4143% ( 46) 00:17:04.721 2.382 - 2.394: 90.0802% ( 88) 00:17:04.721 2.394 - 2.406: 90.2921% ( 28) 00:17:04.721 2.406 - 2.418: 90.5342% ( 32) 00:17:04.721 2.418 - 2.430: 90.6628% ( 17) 00:17:04.721 2.430 - 2.441: 90.8217% ( 21) 00:17:04.721 2.441 - 2.453: 90.9352% ( 15) 00:17:04.721 2.453 - 2.465: 91.1244% ( 25) 00:17:04.721 2.465 - 2.477: 91.2833% ( 21) 00:17:04.721 2.477 - 2.489: 91.4044% ( 16) 00:17:04.721 2.489 - 2.501: 91.5406% ( 18) 00:17:04.721 2.501 - 2.513: 91.7373% ( 26) 00:17:04.721 2.513 - 2.524: 91.8054% ( 9) 00:17:04.721 2.524 - 2.536: 91.9416% ( 18) 00:17:04.721 2.536 - 2.548: 92.0702% ( 17) 00:17:04.721 2.548 - 2.560: 92.3350% ( 35) 00:17:04.721 2.560 - 2.572: 92.5620% ( 30) 00:17:04.721 2.572 - 2.584: 92.7739% ( 28) 00:17:04.721 2.584 - 2.596: 92.9177% ( 19) 00:17:04.721 2.596 - 2.607: 93.0387% ( 16) 00:17:04.721 2.607 - 2.619: 93.2052% ( 22) 00:17:04.721 2.619 - 2.631: 93.3490% ( 19) 00:17:04.721 2.631 - 2.643: 93.5230% ( 23) 00:17:04.721 2.643 - 2.655: 93.6441% ( 16) 00:17:04.721 2.655 - 2.667: 93.7727% ( 17) 00:17:04.721 2.667 - 2.679: 93.9165% ( 19) 00:17:04.721 2.679 - 2.690: 94.0829% ( 22) 00:17:04.721 2.690 - 2.702: 94.2343% ( 20) 00:17:04.721 2.702 - 2.714: 94.3705% ( 18) 00:17:04.721 2.714 - 2.726: 94.5218% ( 20) 00:17:04.721 2.726 - 2.738: 94.6580% ( 18) 00:17:04.721 2.738 - 2.750: 94.7639% ( 14) 00:17:04.721 2.750 - 2.761: 94.9380% ( 23) 00:17:04.721 2.761 - 2.773: 95.1196% ( 24) 00:17:04.721 2.773 - 2.785: 95.2482% ( 17) 00:17:04.721 2.785 - 2.797: 95.3768% ( 17) 00:17:04.721 2.797 - 2.809: 95.5433% ( 22) 00:17:04.721 2.809 - 2.821: 95.6568% ( 15) 00:17:04.721 2.821 - 2.833: 95.7324% ( 10) 00:17:04.721 2.833 - 2.844: 95.8308% ( 13) 00:17:04.721 2.844 - 2.856: 95.9140% ( 11) 00:17:04.721 2.856 - 2.868: 95.9973% ( 11) 00:17:04.721 2.868 - 2.880: 96.0956% ( 13) 00:17:04.721 2.880 - 2.892: 96.2394% ( 19) 00:17:04.721 2.892 - 2.904: 96.2848% ( 6) 00:17:04.721 2.904 - 2.916: 96.4059% ( 16) 00:17:04.721 2.916 - 2.927: 96.5194% ( 15) 00:17:04.721 2.927 - 2.939: 96.6177% ( 13) 00:17:04.721 2.939 - 2.951: 96.7312% ( 15) 00:17:04.721 2.951 - 2.963: 96.8220% ( 12) 00:17:04.721 2.963 - 2.975: 96.9128% ( 12) 00:17:04.721 2.975 - 2.987: 96.9961% ( 11) 00:17:04.721 2.987 - 2.999: 97.0717% ( 10) 00:17:04.721 2.999 - 3.010: 97.1096% ( 5) 00:17:04.721 3.010 - 3.022: 97.1928% ( 11) 00:17:04.721 3.022 - 3.034: 97.2231% ( 4) 00:17:04.721 3.034 - 3.058: 97.3214% ( 13) 00:17:04.721 3.058 - 3.081: 97.4198% ( 13) 00:17:04.721 3.081 - 3.105: 97.5106% ( 12) 00:17:04.721 3.105 - 3.129: 97.5636% ( 7) 00:17:04.721 3.129 - 3.153: 97.6392% ( 10) 00:17:04.721 3.153 - 3.176: 97.6846% ( 6) 00:17:04.721 3.176 - 3.200: 97.7452% ( 8) 00:17:04.721 3.200 - 3.224: 97.7754% ( 4) 00:17:04.721 3.224 - 3.247: 97.7906% ( 2) 00:17:04.721 3.247 - 3.271: 97.8511% ( 8) 00:17:04.721 3.271 - 3.295: 97.8889% ( 5) 00:17:04.721 3.295 - 3.319: 97.9495% ( 8) 00:17:04.722 3.319 - 3.342: 97.9722% ( 3) 00:17:04.722 3.342 - 3.366: 98.0024% ( 4) 00:17:04.722 3.366 - 3.390: 98.0327% ( 4) 00:17:04.722 3.390 - 3.413: 98.0554% ( 3) 00:17:04.722 3.413 - 3.437: 98.0705% ( 2) 00:17:04.722 3.437 - 3.461: 98.0932% ( 3) 00:17:04.722 3.461 - 3.484: 98.1008% ( 1) 00:17:04.722 3.484 - 3.508: 98.1084% ( 1) 00:17:04.722 3.508 - 3.532: 98.1159% ( 1) 00:17:04.722 3.532 - 3.556: 98.1311% ( 2) 00:17:04.722 3.579 - 3.603: 98.1538% ( 3) 00:17:04.722 3.603 - 3.627: 98.1765% ( 3) 
00:17:04.722 3.627 - 3.650: 98.2067% ( 4) 00:17:04.722 3.650 - 3.674: 98.2219% ( 2) 00:17:04.722 3.674 - 3.698: 98.2294% ( 1) 00:17:04.722 3.698 - 3.721: 98.2446% ( 2) 00:17:04.722 3.769 - 3.793: 98.2824% ( 5) 00:17:04.722 3.793 - 3.816: 98.2900% ( 1) 00:17:04.722 3.816 - 3.840: 98.3051% ( 2) 00:17:04.722 3.840 - 3.864: 98.3127% ( 1) 00:17:04.722 3.864 - 3.887: 98.3278% ( 2) 00:17:04.722 3.911 - 3.935: 98.3354% ( 1) 00:17:04.722 3.935 - 3.959: 98.3429% ( 1) 00:17:04.722 3.959 - 3.982: 98.3505% ( 1) 00:17:04.722 3.982 - 4.006: 98.3581% ( 1) 00:17:04.722 4.006 - 4.030: 98.3656% ( 1) 00:17:04.722 4.030 - 4.053: 98.3732% ( 1) 00:17:04.722 4.053 - 4.077: 98.3808% ( 1) 00:17:04.722 4.077 - 4.101: 98.3883% ( 1) 00:17:04.722 4.124 - 4.148: 98.3959% ( 1) [2024-07-26 18:17:30.817848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:04.981 4.148 - 4.172: 98.4035% ( 1) 00:17:04.981 4.172 - 4.196: 98.4110% ( 1) 00:17:04.981 4.196 - 4.219: 98.4262% ( 2) 00:17:04.981 4.290 - 4.314: 98.4337% ( 1) 00:17:04.981 4.338 - 4.361: 98.4413% ( 1) 00:17:04.981 4.456 - 4.480: 98.4488% ( 1) 00:17:04.981 4.575 - 4.599: 98.4564% ( 1) 00:17:04.981 4.599 - 4.622: 98.4640% ( 1) 00:17:04.981 4.646 - 4.670: 98.4715% ( 1) 00:17:04.981 4.764 - 4.788: 98.4791% ( 1) 00:17:04.981 4.788 - 4.812: 98.4867% ( 1) 00:17:04.981 4.978 - 5.001: 98.4942% ( 1) 00:17:04.981 5.428 - 5.452: 98.5018% ( 1) 00:17:04.981 5.476 - 5.499: 98.5094% ( 1) 00:17:04.981 5.523 - 5.547: 98.5169% ( 1) 00:17:04.981 5.570 - 5.594: 98.5245% ( 1) 00:17:04.981 5.594 - 5.618: 98.5321% ( 1) 00:17:04.981 5.689 - 5.713: 98.5472% ( 2) 00:17:04.981 5.784 - 5.807: 98.5623% ( 2) 00:17:04.981 5.831 - 5.855: 98.5699% ( 1) 00:17:04.981 5.926 - 5.950: 98.5775% ( 1) 00:17:04.981 5.973 - 5.997: 98.5850% ( 1) 00:17:04.981 5.997 - 6.021: 98.5926% ( 1) 00:17:04.981 6.021 - 6.044: 98.6002% ( 1) 00:17:04.981 6.044 - 6.068: 98.6229% ( 3) 00:17:04.981 6.353 - 6.400: 98.6380% ( 2) 00:17:04.981 6.447 - 6.495: 98.6531% ( 2) 00:17:04.981 6.495 - 6.542: 98.6683% ( 2) 00:17:04.981 7.016 - 7.064: 98.6758% ( 1) 00:17:04.982 7.301 - 7.348: 98.6910% ( 2) 00:17:04.982 7.585 - 7.633: 98.6985% ( 1) 00:17:04.982 7.870 - 7.917: 98.7137% ( 2) 00:17:04.982 7.917 - 7.964: 98.7212% ( 1) 00:17:04.982 8.960 - 9.007: 98.7288% ( 1) 00:17:04.982 9.339 - 9.387: 98.7364% ( 1) 00:17:04.982 9.529 - 9.576: 98.7439% ( 1) 00:17:04.982 10.335 - 10.382: 98.7515% ( 1) 00:17:04.982 10.761 - 10.809: 98.7591% ( 1) 00:17:04.982 11.567 - 11.615: 98.7666% ( 1) 00:17:04.982 12.421 - 12.516: 98.7742% ( 1) 00:17:04.982 13.464 - 13.559: 98.7818% ( 1) 00:17:04.982 13.559 - 13.653: 98.7893% ( 1) 00:17:04.982 15.170 - 15.265: 98.7969% ( 1) 00:17:04.982 15.550 - 15.644: 98.8045% ( 1) 00:17:04.982 15.644 - 15.739: 98.8196% ( 2) 00:17:04.982 15.739 - 15.834: 98.8272% ( 1) 00:17:04.982 15.834 - 15.929: 98.8347% ( 1) 00:17:04.982 15.929 - 16.024: 98.8499% ( 2) 00:17:04.982 16.024 - 16.119: 98.9104% ( 8) 00:17:04.982 16.119 - 16.213: 98.9634% ( 7) 00:17:04.982 16.213 - 16.308: 99.0012% ( 5) 00:17:04.982 16.308 - 16.403: 99.0239% ( 3) 00:17:04.982 16.403 - 16.498: 99.1071% ( 11) 00:17:04.982 16.498 - 16.593: 99.1450% ( 5) 00:17:04.982 16.593 - 16.687: 99.1904% ( 6) 00:17:04.982 16.687 - 16.782: 99.2282% ( 5) 00:17:04.982 16.782 - 16.877: 99.2585% ( 4) 00:17:04.982 16.877 - 16.972: 99.2736% ( 2) 00:17:04.982 16.972 - 17.067: 99.2963% ( 3) 00:17:04.982 17.067 - 17.161: 99.3114% ( 2) 00:17:04.982 17.161 - 17.256: 99.3417% ( 4) 00:17:04.982 17.256 - 17.351: 99.3568% (
2) 00:17:04.982 17.351 - 17.446: 99.3720% ( 2) 00:17:04.982 17.541 - 17.636: 99.3871% ( 2) 00:17:04.982 17.730 - 17.825: 99.3947% ( 1) 00:17:04.982 17.825 - 17.920: 99.4022% ( 1) 00:17:04.982 17.920 - 18.015: 99.4174% ( 2) 00:17:04.982 18.015 - 18.110: 99.4249% ( 1) 00:17:04.982 18.773 - 18.868: 99.4325% ( 1) 00:17:04.982 1025.517 - 1031.585: 99.4401% ( 1) 00:17:04.982 3980.705 - 4004.978: 99.8562% ( 55) 00:17:04.982 4004.978 - 4029.250: 99.9924% ( 18) 00:17:04.982 6990.507 - 7039.052: 100.0000% ( 1) 00:17:04.982 00:17:04.982 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:04.982 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:04.982 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:04.982 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:04.982 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:04.982 [ 00:17:04.982 { 00:17:04.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.982 "subtype": "Discovery", 00:17:04.982 "listen_addresses": [], 00:17:04.982 "allow_any_host": true, 00:17:04.982 "hosts": [] 00:17:04.982 }, 00:17:04.982 { 00:17:04.982 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:04.982 "subtype": "NVMe", 00:17:04.982 "listen_addresses": [ 00:17:04.982 { 00:17:04.982 "trtype": "VFIOUSER", 00:17:04.982 "adrfam": "IPv4", 00:17:04.982 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:04.982 "trsvcid": "0" 00:17:04.982 } 00:17:04.982 ], 00:17:04.982 "allow_any_host": true, 00:17:04.982 "hosts": [], 00:17:04.982 "serial_number": "SPDK1", 00:17:04.982 "model_number": "SPDK bdev Controller", 00:17:04.982 "max_namespaces": 32, 00:17:04.982 "min_cntlid": 1, 00:17:04.982 "max_cntlid": 65519, 00:17:04.982 "namespaces": [ 00:17:04.982 { 00:17:04.982 "nsid": 1, 00:17:04.982 "bdev_name": "Malloc1", 00:17:04.982 "name": "Malloc1", 00:17:04.982 "nguid": "8DEDA8888A0045BA9C08C0028C7BD123", 00:17:04.982 "uuid": "8deda888-8a00-45ba-9c08-c0028c7bd123" 00:17:04.982 } 00:17:04.982 ] 00:17:04.982 }, 00:17:04.982 { 00:17:04.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:04.982 "subtype": "NVMe", 00:17:04.982 "listen_addresses": [ 00:17:04.982 { 00:17:04.982 "trtype": "VFIOUSER", 00:17:04.982 "adrfam": "IPv4", 00:17:04.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:04.982 "trsvcid": "0" 00:17:04.982 } 00:17:04.982 ], 00:17:04.982 "allow_any_host": true, 00:17:04.982 "hosts": [], 00:17:04.982 "serial_number": "SPDK2", 00:17:04.982 "model_number": "SPDK bdev Controller", 00:17:04.982 "max_namespaces": 32, 00:17:04.982 "min_cntlid": 1, 00:17:04.982 "max_cntlid": 65519, 00:17:04.982 "namespaces": [ 00:17:04.982 { 00:17:04.982 "nsid": 1, 00:17:04.982 "bdev_name": "Malloc2", 00:17:04.982 "name": "Malloc2", 00:17:04.982 "nguid": "30C6EFB699A140AFA36E9092BAD1F4B1", 00:17:04.982 "uuid": "30c6efb6-99a1-40af-a36e-9092bad1f4b1" 00:17:04.982 } 00:17:04.982 ] 00:17:04.982 } 00:17:04.982 ] 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=1460720 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:04.982 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:05.242 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.242 [2024-07-26 18:17:31.270514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:05.502 Malloc3 00:17:05.502 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:05.502 [2024-07-26 18:17:31.639172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:05.761 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:05.761 Asynchronous Event Request test 00:17:05.761 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.761 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.761 Registering asynchronous event callbacks... 00:17:05.761 Starting namespace attribute notice tests for all controllers... 00:17:05.761 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:05.761 aer_cb - Changed Namespace 00:17:05.761 Cleaning up... 
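The aer test above passes -n 2, arms asynchronous event requests, and then waits for the namespace-attribute notice that hot-adding Malloc3 produces (event type 0x02, event info 0x00, log page 4, per the aer_cb line). Wiring up the same notification through the public API is one registration call plus a callback; a sketch, with cdw0 decoded per the NVMe spec:

    #include <stdio.h>
    #include <stdint.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    return;   /* AER aborted, e.g. at controller shutdown */
            }
            /* Completion dword 0: bits 2:0 event type, 15:8 event info,
             * 23:16 associated log page ID. */
            uint32_t ev_type  = cpl->cdw0 & 0x7;
            uint32_t ev_info  = (cpl->cdw0 >> 8) & 0xff;
            uint32_t log_page = (cpl->cdw0 >> 16) & 0xff;
            printf("AEN: type 0x%02x info 0x%02x log page %u\n", ev_type, ev_info, log_page);
            /* type 0x02 / info 0x00 / log page 4 is a namespace attribute
             * change, which is what the Malloc3 hot-add above produces. */
    }

    /* After connecting: */
    void
    arm_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
            /* The callback only fires while the admin queue is polled: */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }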
00:17:05.761 [ 00:17:05.761 { 00:17:05.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:05.761 "subtype": "Discovery", 00:17:05.761 "listen_addresses": [], 00:17:05.761 "allow_any_host": true, 00:17:05.761 "hosts": [] 00:17:05.761 }, 00:17:05.762 { 00:17:05.762 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:05.762 "subtype": "NVMe", 00:17:05.762 "listen_addresses": [ 00:17:05.762 { 00:17:05.762 "trtype": "VFIOUSER", 00:17:05.762 "adrfam": "IPv4", 00:17:05.762 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:05.762 "trsvcid": "0" 00:17:05.762 } 00:17:05.762 ], 00:17:05.762 "allow_any_host": true, 00:17:05.762 "hosts": [], 00:17:05.762 "serial_number": "SPDK1", 00:17:05.762 "model_number": "SPDK bdev Controller", 00:17:05.762 "max_namespaces": 32, 00:17:05.762 "min_cntlid": 1, 00:17:05.762 "max_cntlid": 65519, 00:17:05.762 "namespaces": [ 00:17:05.762 { 00:17:05.762 "nsid": 1, 00:17:05.762 "bdev_name": "Malloc1", 00:17:05.762 "name": "Malloc1", 00:17:05.762 "nguid": "8DEDA8888A0045BA9C08C0028C7BD123", 00:17:05.762 "uuid": "8deda888-8a00-45ba-9c08-c0028c7bd123" 00:17:05.762 }, 00:17:05.762 { 00:17:05.762 "nsid": 2, 00:17:05.762 "bdev_name": "Malloc3", 00:17:05.762 "name": "Malloc3", 00:17:05.762 "nguid": "14E4227F11664F388A52038154C3FD04", 00:17:05.762 "uuid": "14e4227f-1166-4f38-8a52-038154c3fd04" 00:17:05.762 } 00:17:05.762 ] 00:17:05.762 }, 00:17:05.762 { 00:17:05.762 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:05.762 "subtype": "NVMe", 00:17:05.762 "listen_addresses": [ 00:17:05.762 { 00:17:05.762 "trtype": "VFIOUSER", 00:17:05.762 "adrfam": "IPv4", 00:17:05.762 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:05.762 "trsvcid": "0" 00:17:05.762 } 00:17:05.762 ], 00:17:05.762 "allow_any_host": true, 00:17:05.762 "hosts": [], 00:17:05.762 "serial_number": "SPDK2", 00:17:05.762 "model_number": "SPDK bdev Controller", 00:17:05.762 "max_namespaces": 32, 00:17:05.762 "min_cntlid": 1, 00:17:05.762 "max_cntlid": 65519, 00:17:05.762 "namespaces": [ 00:17:05.762 { 00:17:05.762 "nsid": 1, 00:17:05.762 "bdev_name": "Malloc2", 00:17:05.762 "name": "Malloc2", 00:17:05.762 "nguid": "30C6EFB699A140AFA36E9092BAD1F4B1", 00:17:05.762 "uuid": "30c6efb6-99a1-40af-a36e-9092bad1f4b1" 00:17:05.762 } 00:17:05.762 ] 00:17:05.762 } 00:17:05.762 ] 00:17:05.762 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1460720 00:17:05.762 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:05.762 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:05.762 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:05.762 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:06.023 [2024-07-26 18:17:31.918788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:17:06.023 [2024-07-26 18:17:31.918827] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460821 ] 00:17:06.023 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.023 [2024-07-26 18:17:31.936589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:06.023 [2024-07-26 18:17:31.954056] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:06.023 [2024-07-26 18:17:31.964152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:06.023 [2024-07-26 18:17:31.964186] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2d5a735000 00:17:06.023 [2024-07-26 18:17:31.965161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.966161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.967172] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.968184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.969190] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.970193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.971203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.972210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.023 [2024-07-26 18:17:31.973218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:06.023 [2024-07-26 18:17:31.973239] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2d594f7000 00:17:06.023 [2024-07-26 18:17:31.974337] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:06.023 [2024-07-26 18:17:31.988728] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:06.023 [2024-07-26 18:17:31.988760] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:06.023 [2024-07-26 18:17:31.993858] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:06.023 [2024-07-26 18:17:31.993908] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: 
max_completions_cap = 64 num_trackers = 192 00:17:06.023 [2024-07-26 18:17:31.993992] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:06.023 [2024-07-26 18:17:31.994013] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:06.023 [2024-07-26 18:17:31.994023] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:06.023 [2024-07-26 18:17:31.994865] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:06.023 [2024-07-26 18:17:31.994888] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:06.023 [2024-07-26 18:17:31.994901] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:06.023 [2024-07-26 18:17:31.995866] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:06.023 [2024-07-26 18:17:31.995885] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:06.023 [2024-07-26 18:17:31.995899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:06.023 [2024-07-26 18:17:31.996875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:06.023 [2024-07-26 18:17:31.996895] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:06.023 [2024-07-26 18:17:31.997875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:06.023 [2024-07-26 18:17:31.997900] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:06.023 [2024-07-26 18:17:31.997910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:06.023 [2024-07-26 18:17:31.997921] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:06.023 [2024-07-26 18:17:31.998030] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:06.023 [2024-07-26 18:17:31.998039] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:06.023 [2024-07-26 18:17:31.998047] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:06.023 [2024-07-26 18:17:31.998884] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:06.023 [2024-07-26 18:17:31.999886] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:06.023 [2024-07-26 18:17:32.000895] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:06.023 [2024-07-26 18:17:32.001891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:06.023 [2024-07-26 18:17:32.001970] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:06.023 [2024-07-26 18:17:32.002908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:06.023 [2024-07-26 18:17:32.002927] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:06.023 [2024-07-26 18:17:32.002936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:06.023 [2024-07-26 18:17:32.002960] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:06.023 [2024-07-26 18:17:32.002973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:06.023 [2024-07-26 18:17:32.002992] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:06.023 [2024-07-26 18:17:32.003001] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.023 [2024-07-26 18:17:32.003008] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.023 [2024-07-26 18:17:32.003025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.023 [2024-07-26 18:17:32.011073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:06.023 [2024-07-26 18:17:32.011096] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:06.024 [2024-07-26 18:17:32.011105] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:06.024 [2024-07-26 18:17:32.011113] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:06.024 [2024-07-26 18:17:32.011121] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:06.024 [2024-07-26 18:17:32.011129] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:06.024 [2024-07-26 18:17:32.011142] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:06.024 [2024-07-26 18:17:32.011151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.011164] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.011184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.019084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.019112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.024 [2024-07-26 18:17:32.019126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.024 [2024-07-26 18:17:32.019139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.024 [2024-07-26 18:17:32.019151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.024 [2024-07-26 18:17:32.019160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.019176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.019191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.027071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.027088] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:06.024 [2024-07-26 18:17:32.027097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.027113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.027124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.027137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.035069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.035143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.035160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.035173] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002f9000 len:4096 00:17:06.024 [2024-07-26 18:17:32.035181] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:06.024 [2024-07-26 18:17:32.035188] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.024 [2024-07-26 18:17:32.035198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.043072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.043094] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:06.024 [2024-07-26 18:17:32.043121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.043135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.043148] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:06.024 [2024-07-26 18:17:32.043156] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.024 [2024-07-26 18:17:32.043162] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.024 [2024-07-26 18:17:32.043173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.051073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.051099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.051115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.051128] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:06.024 [2024-07-26 18:17:32.051136] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.024 [2024-07-26 18:17:32.051143] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.024 [2024-07-26 18:17:32.051153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.059072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.059092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059118] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059160] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:06.024 [2024-07-26 18:17:32.059168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:06.024 [2024-07-26 18:17:32.059176] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:06.024 [2024-07-26 18:17:32.059204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.067070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.067096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:06.024 [2024-07-26 18:17:32.075070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:06.024 [2024-07-26 18:17:32.075095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:06.025 [2024-07-26 18:17:32.083071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:06.025 [2024-07-26 18:17:32.083095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:06.025 [2024-07-26 18:17:32.091070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:06.025 [2024-07-26 18:17:32.091100] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:06.025 [2024-07-26 18:17:32.091126] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:06.025 [2024-07-26 18:17:32.091133] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:06.025 [2024-07-26 18:17:32.091139] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:06.025 [2024-07-26 18:17:32.091146] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:06.025 [2024-07-26 18:17:32.091156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:06.025 [2024-07-26 18:17:32.091169] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:06.025 [2024-07-26 18:17:32.091178] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:06.025 [2024-07-26 18:17:32.091184] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.025 [2024-07-26 18:17:32.091194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:06.025 [2024-07-26 18:17:32.091205] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:06.025 [2024-07-26 18:17:32.091214] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.025 [2024-07-26 18:17:32.091220] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.025 [2024-07-26 18:17:32.091230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.025 [2024-07-26 18:17:32.091242] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:06.025 [2024-07-26 18:17:32.091250] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:06.025 [2024-07-26 18:17:32.091257] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.025 [2024-07-26 18:17:32.091266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:06.025 [2024-07-26 18:17:32.099072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:06.025 [2024-07-26 18:17:32.099099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:06.025 [2024-07-26 18:17:32.099116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:06.025 [2024-07-26 18:17:32.099131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:06.025 ===================================================== 00:17:06.025 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:06.025 ===================================================== 00:17:06.025 Controller Capabilities/Features 00:17:06.025 ================================ 00:17:06.025 Vendor ID: 4e58 00:17:06.025 Subsystem Vendor ID: 4e58 00:17:06.025 Serial Number: SPDK2 00:17:06.025 Model Number: SPDK bdev Controller 00:17:06.025 Firmware Version: 24.09 00:17:06.025 Recommended Arb Burst: 6 00:17:06.025 IEEE OUI Identifier: 8d 6b 50 00:17:06.025 Multi-path I/O 00:17:06.025 May have multiple subsystem ports: Yes 00:17:06.025 May have multiple controllers: Yes 00:17:06.025 Associated with SR-IOV VF: No 00:17:06.025 Max Data Transfer Size: 131072 00:17:06.025 Max Number of Namespaces: 32 00:17:06.025 Max Number of I/O Queues: 127 00:17:06.025 NVMe Specification Version (VS): 1.3 00:17:06.025 NVMe Specification Version (Identify): 1.3 00:17:06.025 Maximum Queue Entries: 256 00:17:06.025 Contiguous Queues Required: Yes 00:17:06.025 
Arbitration Mechanisms Supported 00:17:06.025 Weighted Round Robin: Not Supported 00:17:06.025 Vendor Specific: Not Supported 00:17:06.025 Reset Timeout: 15000 ms 00:17:06.025 Doorbell Stride: 4 bytes 00:17:06.025 NVM Subsystem Reset: Not Supported 00:17:06.025 Command Sets Supported 00:17:06.025 NVM Command Set: Supported 00:17:06.025 Boot Partition: Not Supported 00:17:06.025 Memory Page Size Minimum: 4096 bytes 00:17:06.025 Memory Page Size Maximum: 4096 bytes 00:17:06.025 Persistent Memory Region: Not Supported 00:17:06.025 Optional Asynchronous Events Supported 00:17:06.025 Namespace Attribute Notices: Supported 00:17:06.025 Firmware Activation Notices: Not Supported 00:17:06.025 ANA Change Notices: Not Supported 00:17:06.025 PLE Aggregate Log Change Notices: Not Supported 00:17:06.025 LBA Status Info Alert Notices: Not Supported 00:17:06.025 EGE Aggregate Log Change Notices: Not Supported 00:17:06.025 Normal NVM Subsystem Shutdown event: Not Supported 00:17:06.025 Zone Descriptor Change Notices: Not Supported 00:17:06.025 Discovery Log Change Notices: Not Supported 00:17:06.025 Controller Attributes 00:17:06.025 128-bit Host Identifier: Supported 00:17:06.025 Non-Operational Permissive Mode: Not Supported 00:17:06.025 NVM Sets: Not Supported 00:17:06.025 Read Recovery Levels: Not Supported 00:17:06.025 Endurance Groups: Not Supported 00:17:06.025 Predictable Latency Mode: Not Supported 00:17:06.025 Traffic Based Keep ALive: Not Supported 00:17:06.025 Namespace Granularity: Not Supported 00:17:06.025 SQ Associations: Not Supported 00:17:06.025 UUID List: Not Supported 00:17:06.025 Multi-Domain Subsystem: Not Supported 00:17:06.025 Fixed Capacity Management: Not Supported 00:17:06.025 Variable Capacity Management: Not Supported 00:17:06.025 Delete Endurance Group: Not Supported 00:17:06.025 Delete NVM Set: Not Supported 00:17:06.025 Extended LBA Formats Supported: Not Supported 00:17:06.025 Flexible Data Placement Supported: Not Supported 00:17:06.025 00:17:06.025 Controller Memory Buffer Support 00:17:06.025 ================================ 00:17:06.026 Supported: No 00:17:06.026 00:17:06.026 Persistent Memory Region Support 00:17:06.026 ================================ 00:17:06.026 Supported: No 00:17:06.026 00:17:06.026 Admin Command Set Attributes 00:17:06.026 ============================ 00:17:06.026 Security Send/Receive: Not Supported 00:17:06.026 Format NVM: Not Supported 00:17:06.026 Firmware Activate/Download: Not Supported 00:17:06.026 Namespace Management: Not Supported 00:17:06.026 Device Self-Test: Not Supported 00:17:06.026 Directives: Not Supported 00:17:06.026 NVMe-MI: Not Supported 00:17:06.026 Virtualization Management: Not Supported 00:17:06.026 Doorbell Buffer Config: Not Supported 00:17:06.026 Get LBA Status Capability: Not Supported 00:17:06.026 Command & Feature Lockdown Capability: Not Supported 00:17:06.026 Abort Command Limit: 4 00:17:06.026 Async Event Request Limit: 4 00:17:06.026 Number of Firmware Slots: N/A 00:17:06.026 Firmware Slot 1 Read-Only: N/A 00:17:06.026 Firmware Activation Without Reset: N/A 00:17:06.026 Multiple Update Detection Support: N/A 00:17:06.026 Firmware Update Granularity: No Information Provided 00:17:06.026 Per-Namespace SMART Log: No 00:17:06.026 Asymmetric Namespace Access Log Page: Not Supported 00:17:06.026 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:06.026 Command Effects Log Page: Supported 00:17:06.026 Get Log Page Extended Data: Supported 00:17:06.026 Telemetry Log Pages: Not Supported 00:17:06.026 Persistent Event Log 
Pages: Not Supported 00:17:06.026 Supported Log Pages Log Page: May Support 00:17:06.026 Commands Supported & Effects Log Page: Not Supported 00:17:06.026 Feature Identifiers & Effects Log Page:May Support 00:17:06.026 NVMe-MI Commands & Effects Log Page: May Support 00:17:06.026 Data Area 4 for Telemetry Log: Not Supported 00:17:06.026 Error Log Page Entries Supported: 128 00:17:06.026 Keep Alive: Supported 00:17:06.026 Keep Alive Granularity: 10000 ms 00:17:06.026 00:17:06.026 NVM Command Set Attributes 00:17:06.026 ========================== 00:17:06.026 Submission Queue Entry Size 00:17:06.026 Max: 64 00:17:06.026 Min: 64 00:17:06.026 Completion Queue Entry Size 00:17:06.026 Max: 16 00:17:06.026 Min: 16 00:17:06.026 Number of Namespaces: 32 00:17:06.026 Compare Command: Supported 00:17:06.026 Write Uncorrectable Command: Not Supported 00:17:06.026 Dataset Management Command: Supported 00:17:06.026 Write Zeroes Command: Supported 00:17:06.026 Set Features Save Field: Not Supported 00:17:06.026 Reservations: Not Supported 00:17:06.026 Timestamp: Not Supported 00:17:06.026 Copy: Supported 00:17:06.026 Volatile Write Cache: Present 00:17:06.026 Atomic Write Unit (Normal): 1 00:17:06.026 Atomic Write Unit (PFail): 1 00:17:06.026 Atomic Compare & Write Unit: 1 00:17:06.026 Fused Compare & Write: Supported 00:17:06.026 Scatter-Gather List 00:17:06.026 SGL Command Set: Supported (Dword aligned) 00:17:06.026 SGL Keyed: Not Supported 00:17:06.026 SGL Bit Bucket Descriptor: Not Supported 00:17:06.026 SGL Metadata Pointer: Not Supported 00:17:06.026 Oversized SGL: Not Supported 00:17:06.026 SGL Metadata Address: Not Supported 00:17:06.026 SGL Offset: Not Supported 00:17:06.026 Transport SGL Data Block: Not Supported 00:17:06.026 Replay Protected Memory Block: Not Supported 00:17:06.026 00:17:06.026 Firmware Slot Information 00:17:06.026 ========================= 00:17:06.026 Active slot: 1 00:17:06.026 Slot 1 Firmware Revision: 24.09 00:17:06.026 00:17:06.026 00:17:06.026 Commands Supported and Effects 00:17:06.026 ============================== 00:17:06.026 Admin Commands 00:17:06.026 -------------- 00:17:06.026 Get Log Page (02h): Supported 00:17:06.026 Identify (06h): Supported 00:17:06.026 Abort (08h): Supported 00:17:06.026 Set Features (09h): Supported 00:17:06.026 Get Features (0Ah): Supported 00:17:06.026 Asynchronous Event Request (0Ch): Supported 00:17:06.026 Keep Alive (18h): Supported 00:17:06.026 I/O Commands 00:17:06.026 ------------ 00:17:06.026 Flush (00h): Supported LBA-Change 00:17:06.026 Write (01h): Supported LBA-Change 00:17:06.026 Read (02h): Supported 00:17:06.026 Compare (05h): Supported 00:17:06.026 Write Zeroes (08h): Supported LBA-Change 00:17:06.026 Dataset Management (09h): Supported LBA-Change 00:17:06.026 Copy (19h): Supported LBA-Change 00:17:06.026 00:17:06.026 Error Log 00:17:06.026 ========= 00:17:06.026 00:17:06.026 Arbitration 00:17:06.026 =========== 00:17:06.026 Arbitration Burst: 1 00:17:06.026 00:17:06.026 Power Management 00:17:06.026 ================ 00:17:06.026 Number of Power States: 1 00:17:06.026 Current Power State: Power State #0 00:17:06.026 Power State #0: 00:17:06.026 Max Power: 0.00 W 00:17:06.027 Non-Operational State: Operational 00:17:06.027 Entry Latency: Not Reported 00:17:06.027 Exit Latency: Not Reported 00:17:06.027 Relative Read Throughput: 0 00:17:06.027 Relative Read Latency: 0 00:17:06.027 Relative Write Throughput: 0 00:17:06.027 Relative Write Latency: 0 00:17:06.027 Idle Power: Not Reported 00:17:06.027 Active Power: Not Reported 
00:17:06.027 Non-Operational Permissive Mode: Not Supported 00:17:06.027 00:17:06.027 Health Information 00:17:06.027 ================== 00:17:06.027 Critical Warnings: 00:17:06.027 Available Spare Space: OK 00:17:06.027 Temperature: OK 00:17:06.027 Device Reliability: OK 00:17:06.027 Read Only: No 00:17:06.027 Volatile Memory Backup: OK 00:17:06.027 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:06.027 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:06.027 Available Spare: 0% 00:17:06.027 [2024-07-26 18:17:32.099245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:06.027 [2024-07-26 18:17:32.107070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:06.027 [2024-07-26 18:17:32.107133] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:06.027 [2024-07-26 18:17:32.107162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.027 [2024-07-26 18:17:32.107174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.027 [2024-07-26 18:17:32.107185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.027 [2024-07-26 18:17:32.107195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.027 [2024-07-26 18:17:32.107265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:06.027 [2024-07-26 18:17:32.107286] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:06.027 [2024-07-26 18:17:32.108264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:06.027 [2024-07-26 18:17:32.108335] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:06.027 [2024-07-26 18:17:32.108365] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:06.027 [2024-07-26 18:17:32.109267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:06.027 [2024-07-26 18:17:32.109291] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:06.027 [2024-07-26 18:17:32.109343] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:06.027 [2024-07-26 18:17:32.110536] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:06.027 Available Spare Threshold: 0% 00:17:06.027 Life Percentage Used: 0% 00:17:06.027 Data Units Read: 0 00:17:06.027 Data Units Written: 0 00:17:06.027 Host Read Commands: 0 00:17:06.027 Host Write Commands: 0 00:17:06.027 Controller Busy Time: 0 minutes 00:17:06.027 Power Cycles: 0 00:17:06.027 Power On Hours: 0 hours 00:17:06.027 Unsafe Shutdowns: 0 00:17:06.027 Unrecoverable 
Media Errors: 0 00:17:06.027 Lifetime Error Log Entries: 0 00:17:06.027 Warning Temperature Time: 0 minutes 00:17:06.027 Critical Temperature Time: 0 minutes 00:17:06.027 00:17:06.027 Number of Queues 00:17:06.027 ================ 00:17:06.027 Number of I/O Submission Queues: 127 00:17:06.027 Number of I/O Completion Queues: 127 00:17:06.027 00:17:06.027 Active Namespaces 00:17:06.027 ================= 00:17:06.027 Namespace ID:1 00:17:06.027 Error Recovery Timeout: Unlimited 00:17:06.027 Command Set Identifier: NVM (00h) 00:17:06.027 Deallocate: Supported 00:17:06.027 Deallocated/Unwritten Error: Not Supported 00:17:06.027 Deallocated Read Value: Unknown 00:17:06.027 Deallocate in Write Zeroes: Not Supported 00:17:06.027 Deallocated Guard Field: 0xFFFF 00:17:06.027 Flush: Supported 00:17:06.027 Reservation: Supported 00:17:06.027 Namespace Sharing Capabilities: Multiple Controllers 00:17:06.027 Size (in LBAs): 131072 (0GiB) 00:17:06.027 Capacity (in LBAs): 131072 (0GiB) 00:17:06.027 Utilization (in LBAs): 131072 (0GiB) 00:17:06.027 NGUID: 30C6EFB699A140AFA36E9092BAD1F4B1 00:17:06.027 UUID: 30c6efb6-99a1-40af-a36e-9092bad1f4b1 00:17:06.027 Thin Provisioning: Not Supported 00:17:06.027 Per-NS Atomic Units: Yes 00:17:06.027 Atomic Boundary Size (Normal): 0 00:17:06.027 Atomic Boundary Size (PFail): 0 00:17:06.027 Atomic Boundary Offset: 0 00:17:06.027 Maximum Single Source Range Length: 65535 00:17:06.027 Maximum Copy Length: 65535 00:17:06.027 Maximum Source Range Count: 1 00:17:06.027 NGUID/EUI64 Never Reused: No 00:17:06.027 Namespace Write Protected: No 00:17:06.027 Number of LBA Formats: 1 00:17:06.027 Current LBA Format: LBA Format #00 00:17:06.027 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:06.027 00:17:06.027 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:06.286 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.286 [2024-07-26 18:17:32.340892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:11.579 Initializing NVMe Controllers 00:17:11.579 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:11.579 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:11.579 Initialization complete. Launching workers. 
00:17:11.579 ======================================================== 00:17:11.580 Latency(us) 00:17:11.580 Device Information : IOPS MiB/s Average min max 00:17:11.580 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34251.82 133.80 3736.17 1157.46 7454.54 00:17:11.580 ======================================================== 00:17:11.580 Total : 34251.82 133.80 3736.17 1157.46 7454.54 00:17:11.580 00:17:11.580 [2024-07-26 18:17:37.443441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:11.580 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:11.580 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.580 [2024-07-26 18:17:37.686139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:16.878 Initializing NVMe Controllers 00:17:16.878 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:16.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:16.878 Initialization complete. Launching workers. 00:17:16.878 ======================================================== 00:17:16.878 Latency(us) 00:17:16.878 Device Information : IOPS MiB/s Average min max 00:17:16.878 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31445.28 122.83 4070.05 1212.81 11346.27 00:17:16.878 ======================================================== 00:17:16.878 Total : 31445.28 122.83 4070.05 1212.81 11346.27 00:17:16.878 00:17:16.878 [2024-07-26 18:17:42.708640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:16.878 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:16.878 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.878 [2024-07-26 18:17:42.920856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:22.156 [2024-07-26 18:17:48.045219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:22.156 Initializing NVMe Controllers 00:17:22.156 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:22.156 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:22.156 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:22.157 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:22.157 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:22.157 Initialization complete. Launching workers. 
00:17:22.157 Starting thread on core 2 00:17:22.157 Starting thread on core 3 00:17:22.157 Starting thread on core 1 00:17:22.157 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:22.157 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.416 [2024-07-26 18:17:48.346427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.708 [2024-07-26 18:17:51.401999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.708 Initializing NVMe Controllers 00:17:25.708 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.708 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.708 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:25.708 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:25.708 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:25.708 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:25.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:25.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:25.708 Initialization complete. Launching workers. 00:17:25.708 Starting thread on core 1 with urgent priority queue 00:17:25.708 Starting thread on core 2 with urgent priority queue 00:17:25.708 Starting thread on core 3 with urgent priority queue 00:17:25.708 Starting thread on core 0 with urgent priority queue 00:17:25.708 SPDK bdev Controller (SPDK2 ) core 0: 5169.33 IO/s 19.34 secs/100000 ios 00:17:25.708 SPDK bdev Controller (SPDK2 ) core 1: 6086.33 IO/s 16.43 secs/100000 ios 00:17:25.708 SPDK bdev Controller (SPDK2 ) core 2: 6138.00 IO/s 16.29 secs/100000 ios 00:17:25.708 SPDK bdev Controller (SPDK2 ) core 3: 5655.00 IO/s 17.68 secs/100000 ios 00:17:25.708 ======================================================== 00:17:25.708 00:17:25.708 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:25.708 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.708 [2024-07-26 18:17:51.706577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.708 Initializing NVMe Controllers 00:17:25.708 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.708 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.708 Namespace ID: 1 size: 0GB 00:17:25.708 Initialization complete. 00:17:25.708 INFO: using host memory buffer for IO 00:17:25.708 Hello world! 
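Every example binary in this stretch of the run addresses the same vfio-user endpoint, so the invocations differ only in per-tool flags. A condensed sketch of the pattern, with paths shortened to be relative to the SPDK tree and a hypothetical $TRID shorthand for the transport ID; all flags are copied verbatim from the commands logged above and below:

# Hypothetical shorthand; every tool accepts the same transport ID string.
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
build/examples/hello_world -d 256 -g -r "$TRID"
test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"

The only vfio-user-specific piece is the transport ID itself: trtype:VFIOUSER plus a filesystem traddr pointing at the per-controller socket directory. Once the controller is attached, everything else runs through the ordinary SPDK NVMe host path.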
00:17:25.708 [2024-07-26 18:17:51.715637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.708 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:25.708 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.969 [2024-07-26 18:17:52.018838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:27.346 Initializing NVMe Controllers 00:17:27.346 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:27.346 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:27.346 Initialization complete. Launching workers. 00:17:27.346 submit (in ns) avg, min, max = 6976.3, 3544.4, 4015695.6 00:17:27.346 complete (in ns) avg, min, max = 26351.7, 2064.4, 4015525.6 00:17:27.346 00:17:27.346 Submit histogram 00:17:27.346 ================ 00:17:27.346 Range in us Cumulative Count 00:17:27.346 3.532 - 3.556: 0.2789% ( 37) 00:17:27.346 3.556 - 3.579: 1.6736% ( 185) 00:17:27.346 3.579 - 3.603: 4.0256% ( 312) 00:17:27.346 3.603 - 3.627: 9.6871% ( 751) 00:17:27.346 3.627 - 3.650: 18.2661% ( 1138) 00:17:27.346 3.650 - 3.674: 28.1870% ( 1316) 00:17:27.346 3.674 - 3.698: 36.6076% ( 1117) 00:17:27.346 3.698 - 3.721: 43.8598% ( 962) 00:17:27.346 3.721 - 3.745: 48.7674% ( 651) 00:17:27.346 3.745 - 3.769: 54.0822% ( 705) 00:17:27.346 3.769 - 3.793: 58.4546% ( 580) 00:17:27.346 3.793 - 3.816: 62.3596% ( 518) 00:17:27.346 3.816 - 3.840: 65.8726% ( 466) 00:17:27.346 3.840 - 3.864: 69.3931% ( 467) 00:17:27.346 3.864 - 3.887: 73.2454% ( 511) 00:17:27.346 3.887 - 3.911: 77.4218% ( 554) 00:17:27.346 3.911 - 3.935: 81.7113% ( 569) 00:17:27.346 3.935 - 3.959: 84.7192% ( 399) 00:17:27.346 3.959 - 3.982: 87.1692% ( 325) 00:17:27.346 3.982 - 4.006: 88.9408% ( 235) 00:17:27.346 4.006 - 4.030: 90.5692% ( 216) 00:17:27.346 4.030 - 4.053: 92.0241% ( 193) 00:17:27.346 4.053 - 4.077: 93.1248% ( 146) 00:17:27.346 4.077 - 4.101: 93.9691% ( 112) 00:17:27.346 4.101 - 4.124: 94.8888% ( 122) 00:17:27.346 4.124 - 4.148: 95.5597% ( 89) 00:17:27.346 4.148 - 4.172: 96.1176% ( 74) 00:17:27.346 4.172 - 4.196: 96.4418% ( 43) 00:17:27.346 4.196 - 4.219: 96.7961% ( 47) 00:17:27.346 4.219 - 4.243: 97.0147% ( 29) 00:17:27.346 4.243 - 4.267: 97.1956% ( 24) 00:17:27.346 4.267 - 4.290: 97.2936% ( 13) 00:17:27.346 4.290 - 4.314: 97.3992% ( 14) 00:17:27.346 4.314 - 4.338: 97.4972% ( 13) 00:17:27.346 4.338 - 4.361: 97.5499% ( 7) 00:17:27.346 4.361 - 4.385: 97.6253% ( 10) 00:17:27.346 4.385 - 4.409: 97.6706% ( 6) 00:17:27.346 4.409 - 4.433: 97.7007% ( 4) 00:17:27.346 4.433 - 4.456: 97.7158% ( 2) 00:17:27.346 4.456 - 4.480: 97.7384% ( 3) 00:17:27.346 4.480 - 4.504: 97.7459% ( 1) 00:17:27.346 4.504 - 4.527: 97.7535% ( 1) 00:17:27.346 4.527 - 4.551: 97.7610% ( 1) 00:17:27.346 4.551 - 4.575: 97.7761% ( 2) 00:17:27.346 4.575 - 4.599: 97.7836% ( 1) 00:17:27.346 4.741 - 4.764: 97.7912% ( 1) 00:17:27.346 4.764 - 4.788: 97.7987% ( 1) 00:17:27.346 4.812 - 4.836: 97.8063% ( 1) 00:17:27.346 4.836 - 4.859: 97.8364% ( 4) 00:17:27.346 4.859 - 4.883: 97.8515% ( 2) 00:17:27.346 4.883 - 4.907: 97.8741% ( 3) 00:17:27.346 4.907 - 4.930: 97.9269% ( 7) 00:17:27.346 4.930 - 4.954: 97.9646% ( 5) 00:17:27.346 4.954 - 4.978: 98.0173% ( 7) 00:17:27.346 4.978 - 5.001: 98.0626% ( 6) 00:17:27.346 5.001 - 5.025: 98.1304% ( 9) 
00:17:27.346 5.025 - 5.049: 98.1983% ( 9) 00:17:27.346 5.049 - 5.073: 98.2510% ( 7) 00:17:27.346 5.073 - 5.096: 98.2586% ( 1) 00:17:27.346 5.096 - 5.120: 98.2887% ( 4) 00:17:27.346 5.120 - 5.144: 98.3264% ( 5) 00:17:27.346 5.144 - 5.167: 98.3641% ( 5) 00:17:27.346 5.167 - 5.191: 98.3867% ( 3) 00:17:27.346 5.191 - 5.215: 98.4395% ( 7) 00:17:27.346 5.239 - 5.262: 98.4697% ( 4) 00:17:27.346 5.262 - 5.286: 98.4923% ( 3) 00:17:27.346 5.286 - 5.310: 98.4998% ( 1) 00:17:27.346 5.310 - 5.333: 98.5074% ( 1) 00:17:27.346 5.333 - 5.357: 98.5149% ( 1) 00:17:27.346 5.357 - 5.381: 98.5224% ( 1) 00:17:27.346 5.404 - 5.428: 98.5375% ( 2) 00:17:27.346 5.476 - 5.499: 98.5450% ( 1) 00:17:27.346 5.547 - 5.570: 98.5526% ( 1) 00:17:27.346 5.641 - 5.665: 98.5601% ( 1) 00:17:27.346 5.713 - 5.736: 98.5677% ( 1) 00:17:27.346 5.807 - 5.831: 98.5752% ( 1) 00:17:27.346 6.044 - 6.068: 98.5827% ( 1) 00:17:27.346 6.305 - 6.353: 98.5903% ( 1) 00:17:27.346 6.400 - 6.447: 98.5978% ( 1) 00:17:27.346 6.637 - 6.684: 98.6054% ( 1) 00:17:27.346 6.921 - 6.969: 98.6204% ( 2) 00:17:27.346 7.016 - 7.064: 98.6280% ( 1) 00:17:27.346 7.111 - 7.159: 98.6355% ( 1) 00:17:27.346 7.206 - 7.253: 98.6430% ( 1) 00:17:27.346 7.253 - 7.301: 98.6506% ( 1) 00:17:27.346 7.301 - 7.348: 98.6581% ( 1) 00:17:27.346 7.538 - 7.585: 98.6657% ( 1) 00:17:27.346 7.633 - 7.680: 98.6732% ( 1) 00:17:27.346 7.680 - 7.727: 98.6807% ( 1) 00:17:27.346 7.822 - 7.870: 98.7184% ( 5) 00:17:27.346 8.012 - 8.059: 98.7335% ( 2) 00:17:27.346 8.059 - 8.107: 98.7410% ( 1) 00:17:27.346 8.107 - 8.154: 98.7712% ( 4) 00:17:27.346 8.154 - 8.201: 98.7863% ( 2) 00:17:27.346 8.201 - 8.249: 98.7938% ( 1) 00:17:27.346 8.249 - 8.296: 98.8014% ( 1) 00:17:27.346 8.296 - 8.344: 98.8089% ( 1) 00:17:27.346 8.628 - 8.676: 98.8164% ( 1) 00:17:27.346 8.770 - 8.818: 98.8315% ( 2) 00:17:27.346 8.818 - 8.865: 98.8391% ( 1) 00:17:27.346 8.913 - 8.960: 98.8466% ( 1) 00:17:27.346 9.150 - 9.197: 98.8541% ( 1) 00:17:27.346 9.197 - 9.244: 98.8692% ( 2) 00:17:27.346 9.481 - 9.529: 98.8767% ( 1) 00:17:27.346 11.330 - 11.378: 98.8843% ( 1) 00:17:27.346 11.473 - 11.520: 98.8918% ( 1) 00:17:27.346 12.041 - 12.089: 98.8994% ( 1) 00:17:27.346 13.084 - 13.179: 98.9069% ( 1) 00:17:27.346 13.748 - 13.843: 98.9144% ( 1) 00:17:27.346 14.507 - 14.601: 98.9220% ( 1) 00:17:27.346 17.067 - 17.161: 98.9295% ( 1) 00:17:27.346 17.161 - 17.256: 98.9371% ( 1) 00:17:27.346 17.256 - 17.351: 98.9521% ( 2) 00:17:27.346 17.351 - 17.446: 98.9672% ( 2) 00:17:27.346 17.446 - 17.541: 98.9898% ( 3) 00:17:27.346 17.541 - 17.636: 98.9974% ( 1) 00:17:27.346 17.636 - 17.730: 99.0275% ( 4) 00:17:27.346 17.730 - 17.825: 99.0577% ( 4) 00:17:27.346 17.825 - 17.920: 99.0727% ( 2) 00:17:27.346 17.920 - 18.015: 99.1331% ( 8) 00:17:27.346 18.015 - 18.110: 99.2160% ( 11) 00:17:27.346 18.110 - 18.204: 99.3064% ( 12) 00:17:27.346 18.204 - 18.299: 99.3969% ( 12) 00:17:27.346 18.299 - 18.394: 99.4648% ( 9) 00:17:27.346 18.394 - 18.489: 99.5251% ( 8) 00:17:27.346 18.489 - 18.584: 99.6005% ( 10) 00:17:27.346 18.584 - 18.679: 99.6608% ( 8) 00:17:27.346 18.679 - 18.773: 99.7437% ( 11) 00:17:27.346 18.773 - 18.868: 99.7965% ( 7) 00:17:27.346 18.868 - 18.963: 99.8115% ( 2) 00:17:27.346 18.963 - 19.058: 99.8191% ( 1) 00:17:27.346 19.058 - 19.153: 99.8266% ( 1) 00:17:27.346 19.153 - 19.247: 99.8342% ( 1) 00:17:27.346 19.247 - 19.342: 99.8492% ( 2) 00:17:27.346 19.437 - 19.532: 99.8718% ( 3) 00:17:27.346 19.532 - 19.627: 99.8794% ( 1) 00:17:27.346 19.627 - 19.721: 99.8869% ( 1) 00:17:27.346 20.006 - 20.101: 99.9020% ( 2) 00:17:27.346 21.523 - 21.618: 
99.9095% ( 1) 23.135 - 23.230: 99.9171% ( 1) 47.218 - 47.407: 99.9246% ( 1) 3980.705 - 4004.978: 99.9849% ( 8) 4004.978 - 4029.250: 100.0000% ( 2) 00:17:27.346 00:17:27.346 Complete histogram 00:17:27.346 ================== 00:17:27.346 Range in us Cumulative Count 00:17:27.346 2.062 - 2.074: 4.3272% ( 574) 00:17:27.346 2.074 - 2.086: 39.4421% ( 4658) 00:17:27.346 2.086 - 2.098: 46.9054% ( 990) 00:17:27.346 2.098 - 2.110: 51.8432% ( 655) 00:17:27.346 2.110 - 2.121: 60.1131% ( 1097) 00:17:27.346 2.121 - 2.133: 61.9676% ( 246) 00:17:27.346 2.133 - 2.145: 66.6491% ( 621) 00:17:27.346 2.145 - 2.157: 74.7305% ( 1072) 00:17:27.346 2.157 - 2.169: 75.6804% ( 126) 00:17:27.346 2.169 - 2.181: 78.7034% ( 401) 00:17:27.346 2.181 - 2.193: 81.3871% ( 356) 00:17:27.346 2.193 - 2.204: 81.9902% ( 80) 00:17:27.346 2.204 - 2.216: 83.8070% ( 241) 00:17:27.346 2.216 - 2.228: 89.0011% ( 689) 00:17:27.346 2.228 - 2.240: 90.7576% ( 233) 00:17:27.346 2.240 - 2.252: 92.1070% ( 179) 00:17:27.346 2.252 - 2.264: 93.4791% ( 182) 00:17:27.346 2.264 - 2.276: 93.8183% ( 45) 00:17:27.346 2.276 - 2.287: 94.0445% ( 30) 00:17:27.346 2.287 - 2.299: 94.7682% ( 96) 00:17:27.346 2.299 - 2.311: 95.3185% ( 73) 00:17:27.346 [2024-07-26 18:17:53.113767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:27.346 2.311 - 2.323: 95.5070% ( 25) 00:17:27.346 2.323 - 2.335: 95.5673% ( 8) 00:17:27.346 2.335 - 2.347: 95.6276% ( 8) 00:17:27.346 2.347 - 2.359: 95.7331% ( 14) 00:17:27.346 2.359 - 2.370: 96.0573% ( 43) 00:17:27.346 2.370 - 2.382: 96.6001% ( 72) 00:17:27.346 2.382 - 2.394: 97.0675% ( 62) 00:17:27.346 2.394 - 2.406: 97.4444% ( 50) 00:17:27.346 2.406 - 2.418: 97.5876% ( 19) 00:17:27.346 2.418 - 2.430: 97.7083% ( 16) 00:17:27.346 2.430 - 2.441: 97.8364% ( 17) 00:17:27.346 2.441 - 2.453: 97.9193% ( 11) 00:17:27.346 2.453 - 2.465: 98.0023% ( 11) 00:17:27.346 2.465 - 2.477: 98.1530% ( 20) 00:17:27.346 2.477 - 2.489: 98.1907% ( 5) 00:17:27.346 2.489 - 2.501: 98.2661% ( 10) 00:17:27.346 2.501 - 2.513: 98.3490% ( 11) 00:17:27.346 2.513 - 2.524: 98.4018% ( 7) 00:17:27.346 2.524 - 2.536: 98.4169% ( 2) 00:17:27.346 2.536 - 2.548: 98.4320% ( 2) 00:17:27.347 2.548 - 2.560: 98.4395% ( 1) 00:17:27.347 2.572 - 2.584: 98.4470% ( 1) 00:17:27.347 2.607 - 2.619: 98.4546% ( 1) 00:17:27.347 2.619 - 2.631: 98.4621% ( 1) 00:17:27.347 2.655 - 2.667: 98.4697% ( 1) 00:17:27.347 2.679 - 2.690: 98.4772% ( 1) 00:17:27.347 2.714 - 2.726: 98.4847% ( 1) 00:17:27.347 2.750 - 2.761: 98.4923% ( 1) 00:17:27.347 2.868 - 2.880: 98.4998% ( 1) 00:17:27.347 2.892 - 2.904: 98.5074% ( 1) 00:17:27.347 3.390 - 3.413: 98.5149% ( 1) 00:17:27.347 3.413 - 3.437: 98.5375% ( 2) 00:17:27.347 3.437 - 3.461: 98.5450% ( 1) 00:17:27.347 3.484 - 3.508: 98.5601% ( 2) 00:17:27.347 3.532 - 3.556: 98.5677% ( 1) 00:17:27.347 3.556 - 3.579: 98.5827% ( 2) 00:17:27.347 3.603 - 3.627: 98.5903% ( 1) 00:17:27.347 3.627 - 3.650: 98.5978% ( 1) 00:17:27.347 3.650 - 3.674: 98.6129% ( 2) 00:17:27.347 3.674 - 3.698: 98.6280% ( 2) 00:17:27.347 3.721 - 3.745: 98.6355% ( 1) 00:17:27.347 3.793 - 3.816: 98.6581% ( 3) 00:17:27.347 3.864 - 3.887: 98.6657% ( 1) 00:17:27.347 3.887 - 3.911: 98.6807% ( 2) 00:17:27.347 4.101 - 4.124: 98.6883% ( 1) 00:17:27.347 5.025 - 5.049: 98.6958% ( 1) 00:17:27.347 5.120 - 5.144: 98.7034% ( 1) 00:17:27.347 5.144 - 5.167: 98.7109% ( 1) 00:17:27.347 5.499 - 5.523: 98.7184% ( 1) 00:17:27.347 5.523 - 5.547: 98.7260% ( 1) 00:17:27.347 5.570 - 5.594: 98.7335% ( 1) 00:17:27.347 
5.689 - 5.713: 98.7410% ( 1) 00:17:27.347 5.807 - 5.831: 98.7486% ( 1) 00:17:27.347 5.879 - 5.902: 98.7561% ( 1) 00:17:27.347 6.044 - 6.068: 98.7712% ( 2) 00:17:27.347 6.068 - 6.116: 98.7863% ( 2) 00:17:27.347 6.258 - 6.305: 98.7938% ( 1) 00:17:27.347 6.400 - 6.447: 98.8014% ( 1) 00:17:27.347 7.016 - 7.064: 98.8089% ( 1) 00:17:27.347 7.253 - 7.301: 98.8164% ( 1) 00:17:27.347 8.913 - 8.960: 98.8240% ( 1) 00:17:27.347 11.046 - 11.093: 98.8315% ( 1) 00:17:27.347 15.739 - 15.834: 98.8466% ( 2) 00:17:27.347 15.929 - 16.024: 98.8692% ( 3) 00:17:27.347 16.024 - 16.119: 98.9069% ( 5) 00:17:27.347 16.119 - 16.213: 98.9144% ( 1) 00:17:27.347 16.213 - 16.308: 98.9446% ( 4) 00:17:27.347 16.308 - 16.403: 98.9823% ( 5) 00:17:27.347 16.403 - 16.498: 99.0351% ( 7) 00:17:27.347 16.498 - 16.593: 99.1104% ( 10) 00:17:27.347 16.593 - 16.687: 99.1783% ( 9) 00:17:27.347 16.687 - 16.782: 99.2235% ( 6) 00:17:27.347 16.782 - 16.877: 99.2386% ( 2) 00:17:27.347 16.877 - 16.972: 99.2612% ( 3) 00:17:27.347 16.972 - 17.067: 99.2763% ( 2) 00:17:27.347 17.067 - 17.161: 99.2838% ( 1) 00:17:27.347 17.161 - 17.256: 99.3140% ( 4) 00:17:27.347 17.256 - 17.351: 99.3215% ( 1) 00:17:27.347 17.446 - 17.541: 99.3291% ( 1) 00:17:27.347 17.541 - 17.636: 99.3517% ( 3) 00:17:27.347 17.636 - 17.730: 99.3592% ( 1) 00:17:27.347 17.730 - 17.825: 99.3743% ( 2) 00:17:27.347 18.110 - 18.204: 99.3818% ( 1) 00:17:27.347 18.773 - 18.868: 99.3894% ( 1) 00:17:27.347 21.144 - 21.239: 99.3969% ( 1) 00:17:27.347 3980.705 - 4004.978: 99.8794% ( 64) 00:17:27.347 4004.978 - 4029.250: 100.0000% ( 16) 00:17:27.347 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:27.347 [ 00:17:27.347 { 00:17:27.347 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:27.347 "subtype": "Discovery", 00:17:27.347 "listen_addresses": [], 00:17:27.347 "allow_any_host": true, 00:17:27.347 "hosts": [] 00:17:27.347 }, 00:17:27.347 { 00:17:27.347 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:27.347 "subtype": "NVMe", 00:17:27.347 "listen_addresses": [ 00:17:27.347 { 00:17:27.347 "trtype": "VFIOUSER", 00:17:27.347 "adrfam": "IPv4", 00:17:27.347 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:27.347 "trsvcid": "0" 00:17:27.347 } 00:17:27.347 ], 00:17:27.347 "allow_any_host": true, 00:17:27.347 "hosts": [], 00:17:27.347 "serial_number": "SPDK1", 00:17:27.347 "model_number": "SPDK bdev Controller", 00:17:27.347 "max_namespaces": 32, 00:17:27.347 "min_cntlid": 1, 00:17:27.347 "max_cntlid": 65519, 00:17:27.347 "namespaces": [ 00:17:27.347 { 00:17:27.347 "nsid": 1, 00:17:27.347 "bdev_name": "Malloc1", 00:17:27.347 "name": "Malloc1", 00:17:27.347 "nguid": "8DEDA8888A0045BA9C08C0028C7BD123", 00:17:27.347 "uuid": "8deda888-8a00-45ba-9c08-c0028c7bd123" 00:17:27.347 }, 00:17:27.347 { 00:17:27.347 "nsid": 2, 00:17:27.347 "bdev_name": "Malloc3", 00:17:27.347 
"name": "Malloc3", 00:17:27.347 "nguid": "14E4227F11664F388A52038154C3FD04", 00:17:27.347 "uuid": "14e4227f-1166-4f38-8a52-038154c3fd04" 00:17:27.347 } 00:17:27.347 ] 00:17:27.347 }, 00:17:27.347 { 00:17:27.347 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:27.347 "subtype": "NVMe", 00:17:27.347 "listen_addresses": [ 00:17:27.347 { 00:17:27.347 "trtype": "VFIOUSER", 00:17:27.347 "adrfam": "IPv4", 00:17:27.347 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:27.347 "trsvcid": "0" 00:17:27.347 } 00:17:27.347 ], 00:17:27.347 "allow_any_host": true, 00:17:27.347 "hosts": [], 00:17:27.347 "serial_number": "SPDK2", 00:17:27.347 "model_number": "SPDK bdev Controller", 00:17:27.347 "max_namespaces": 32, 00:17:27.347 "min_cntlid": 1, 00:17:27.347 "max_cntlid": 65519, 00:17:27.347 "namespaces": [ 00:17:27.347 { 00:17:27.347 "nsid": 1, 00:17:27.347 "bdev_name": "Malloc2", 00:17:27.347 "name": "Malloc2", 00:17:27.347 "nguid": "30C6EFB699A140AFA36E9092BAD1F4B1", 00:17:27.347 "uuid": "30c6efb6-99a1-40af-a36e-9092bad1f4b1" 00:17:27.347 } 00:17:27.347 ] 00:17:27.347 } 00:17:27.347 ] 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1463336 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:27.347 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:27.347 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.604 [2024-07-26 18:17:53.558529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:27.604 Malloc4 00:17:27.604 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:27.862 [2024-07-26 18:17:53.967604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:27.862 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:28.120 Asynchronous Event Request test 00:17:28.120 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.120 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.120 Registering asynchronous event callbacks... 00:17:28.120 Starting namespace attribute notice tests for all controllers... 00:17:28.120 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:28.120 aer_cb - Changed Namespace 00:17:28.120 Cleaning up... 00:17:28.120 [ 00:17:28.120 { 00:17:28.120 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.120 "subtype": "Discovery", 00:17:28.120 "listen_addresses": [], 00:17:28.120 "allow_any_host": true, 00:17:28.120 "hosts": [] 00:17:28.120 }, 00:17:28.120 { 00:17:28.120 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:28.120 "subtype": "NVMe", 00:17:28.120 "listen_addresses": [ 00:17:28.120 { 00:17:28.120 "trtype": "VFIOUSER", 00:17:28.120 "adrfam": "IPv4", 00:17:28.120 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:28.120 "trsvcid": "0" 00:17:28.120 } 00:17:28.120 ], 00:17:28.120 "allow_any_host": true, 00:17:28.120 "hosts": [], 00:17:28.120 "serial_number": "SPDK1", 00:17:28.120 "model_number": "SPDK bdev Controller", 00:17:28.120 "max_namespaces": 32, 00:17:28.120 "min_cntlid": 1, 00:17:28.120 "max_cntlid": 65519, 00:17:28.120 "namespaces": [ 00:17:28.120 { 00:17:28.120 "nsid": 1, 00:17:28.120 "bdev_name": "Malloc1", 00:17:28.120 "name": "Malloc1", 00:17:28.120 "nguid": "8DEDA8888A0045BA9C08C0028C7BD123", 00:17:28.120 "uuid": "8deda888-8a00-45ba-9c08-c0028c7bd123" 00:17:28.120 }, 00:17:28.120 { 00:17:28.120 "nsid": 2, 00:17:28.120 "bdev_name": "Malloc3", 00:17:28.120 "name": "Malloc3", 00:17:28.120 "nguid": "14E4227F11664F388A52038154C3FD04", 00:17:28.120 "uuid": "14e4227f-1166-4f38-8a52-038154c3fd04" 00:17:28.120 } 00:17:28.120 ] 00:17:28.120 }, 00:17:28.120 { 00:17:28.120 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:28.120 "subtype": "NVMe", 00:17:28.120 "listen_addresses": [ 00:17:28.120 { 00:17:28.120 "trtype": "VFIOUSER", 00:17:28.120 "adrfam": "IPv4", 00:17:28.120 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:28.120 "trsvcid": "0" 00:17:28.120 } 00:17:28.120 ], 00:17:28.120 "allow_any_host": true, 00:17:28.120 "hosts": [], 00:17:28.120 
"serial_number": "SPDK2", 00:17:28.120 "model_number": "SPDK bdev Controller", 00:17:28.120 "max_namespaces": 32, 00:17:28.120 "min_cntlid": 1, 00:17:28.120 "max_cntlid": 65519, 00:17:28.120 "namespaces": [ 00:17:28.120 { 00:17:28.120 "nsid": 1, 00:17:28.120 "bdev_name": "Malloc2", 00:17:28.120 "name": "Malloc2", 00:17:28.120 "nguid": "30C6EFB699A140AFA36E9092BAD1F4B1", 00:17:28.120 "uuid": "30c6efb6-99a1-40af-a36e-9092bad1f4b1" 00:17:28.120 }, 00:17:28.120 { 00:17:28.120 "nsid": 2, 00:17:28.120 "bdev_name": "Malloc4", 00:17:28.120 "name": "Malloc4", 00:17:28.120 "nguid": "EC9679CAB4EA499F9EF3D34FC0D66FCD", 00:17:28.120 "uuid": "ec9679ca-b4ea-499f-9ef3-d34fc0d66fcd" 00:17:28.120 } 00:17:28.120 ] 00:17:28.120 } 00:17:28.120 ] 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1463336 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1457735 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1457735 ']' 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1457735 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.120 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457735 00:17:28.378 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.378 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.378 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457735' 00:17:28.378 killing process with pid 1457735 00:17:28.378 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1457735 00:17:28.378 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1457735 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1463477 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1463477' 00:17:28.638 Process pid: 1463477 00:17:28.638 18:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1463477 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1463477 ']' 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.638 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 [2024-07-26 18:17:54.620284] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:28.638 [2024-07-26 18:17:54.621271] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:28.638 [2024-07-26 18:17:54.621324] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.638 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.638 [2024-07-26 18:17:54.651638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:28.638 [2024-07-26 18:17:54.683510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.897 [2024-07-26 18:17:54.782867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.897 [2024-07-26 18:17:54.782933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.897 [2024-07-26 18:17:54.782949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.897 [2024-07-26 18:17:54.782963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.897 [2024-07-26 18:17:54.782976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.898 [2024-07-26 18:17:54.783037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.898 [2024-07-26 18:17:54.783116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.898 [2024-07-26 18:17:54.783144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.898 [2024-07-26 18:17:54.783149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.898 [2024-07-26 18:17:54.889309] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:28.898 [2024-07-26 18:17:54.889511] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
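Condensed, the per-device bring-up that the trace below performs is the following RPC sequence (a sketch only: SPDK_ROOT abbreviates the full Jenkins workspace path shown in the log and is not a variable the script sets; the flags and arguments match the rpc.py calls visible in the trace, and the two-device loop mirrors the script's seq 1 2):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # abbreviation for readability, not set in the log
    $SPDK_ROOT/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I    # interrupt-mode VFIOUSER transport
    for i in $(seq 1 2); do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $SPDK_ROOT/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i    # 64 MiB bdev, 512 B blocks
        $SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done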
00:17:28.898 [2024-07-26 18:17:54.889788] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:28.898 [2024-07-26 18:17:54.890382] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:28.898 [2024-07-26 18:17:54.890622] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:28.898 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:28.898 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:28.898 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:29.834 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:30.092 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:30.092 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:30.092 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:30.092 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:30.092 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:30.350 Malloc1 00:17:30.350 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:30.608 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:30.866 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:31.125 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:31.125 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:31.125 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:31.382 Malloc2 00:17:31.382 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:31.640 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:31.897 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:32.155 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1463477 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1463477 ']' 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1463477 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463477 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463477' 00:17:32.156 killing process with pid 1463477 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1463477 00:17:32.156 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1463477 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:32.414 00:17:32.414 real 0m52.936s 00:17:32.414 user 3m29.074s 00:17:32.414 sys 0m4.314s 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:32.414 ************************************ 00:17:32.414 END TEST nvmf_vfio_user 00:17:32.414 ************************************ 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.414 18:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.673 ************************************ 00:17:32.673 START TEST nvmf_vfio_user_nvme_compliance 00:17:32.673 ************************************ 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:32.673 * Looking for test storage... 
00:17:32.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.673 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1464068 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1464068' 00:17:32.674 Process pid: 1464068 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1464068 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1464068 ']' 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.674 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:32.674 [2024-07-26 18:17:58.683980] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:32.674 [2024-07-26 18:17:58.684079] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.674 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.674 [2024-07-26 18:17:58.717320] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:32.674 [2024-07-26 18:17:58.743503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:32.934 [2024-07-26 18:17:58.829253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.934 [2024-07-26 18:17:58.829308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.934 [2024-07-26 18:17:58.829338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.934 [2024-07-26 18:17:58.829350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.934 [2024-07-26 18:17:58.829360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.934 [2024-07-26 18:17:58.829491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.934 [2024-07-26 18:17:58.829557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.934 [2024-07-26 18:17:58.829560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.934 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.934 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:32.934 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:33.872 malloc0 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:33.872 18:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.872 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:33.872 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.872 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:33.872 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.872 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:33.872 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.872 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:34.130 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.130 00:17:34.130 00:17:34.130 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.130 http://cunit.sourceforge.net/ 00:17:34.130 00:17:34.130 00:17:34.130 Suite: nvme_compliance 00:17:34.131 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 18:18:00.167503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.131 [2024-07-26 18:18:00.168982] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:34.131 [2024-07-26 18:18:00.169006] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:34.131 [2024-07-26 18:18:00.169033] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:34.131 [2024-07-26 18:18:00.170520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.131 passed 00:17:34.131 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 18:18:00.256135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.131 [2024-07-26 18:18:00.259150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.389 passed 00:17:34.389 Test: admin_identify_ns ...[2024-07-26 18:18:00.347940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.389 [2024-07-26 18:18:00.409079] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:34.389 [2024-07-26 18:18:00.417078] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:34.389 [2024-07-26 18:18:00.437202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.389 passed 00:17:34.389 Test: admin_get_features_mandatory_features ...[2024-07-26 18:18:00.521847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling 
controller 00:17:34.389 [2024-07-26 18:18:00.524869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.647 passed 00:17:34.647 Test: admin_get_features_optional_features ...[2024-07-26 18:18:00.610463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.647 [2024-07-26 18:18:00.613486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.647 passed 00:17:34.647 Test: admin_set_features_number_of_queues ...[2024-07-26 18:18:00.698962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.906 [2024-07-26 18:18:00.800291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.906 passed 00:17:34.906 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 18:18:00.887027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.906 [2024-07-26 18:18:00.890071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.906 passed 00:17:34.906 Test: admin_get_log_page_with_lpo ...[2024-07-26 18:18:00.976541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.906 [2024-07-26 18:18:01.044096] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:35.169 [2024-07-26 18:18:01.057198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.169 passed 00:17:35.169 Test: fabric_property_get ...[2024-07-26 18:18:01.142594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.169 [2024-07-26 18:18:01.143871] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:35.169 [2024-07-26 18:18:01.145617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.169 passed 00:17:35.169 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 18:18:01.228189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.169 [2024-07-26 18:18:01.229494] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:35.169 [2024-07-26 18:18:01.231212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.169 passed 00:17:35.473 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 18:18:01.314440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.473 [2024-07-26 18:18:01.402073] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:35.473 [2024-07-26 18:18:01.414069] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:35.473 [2024-07-26 18:18:01.418266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.473 passed 00:17:35.473 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 18:18:01.504346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.473 [2024-07-26 18:18:01.505620] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:35.473 [2024-07-26 18:18:01.507383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.473 passed 00:17:35.473 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 18:18:01.589623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:17:35.731 [2024-07-26 18:18:01.664070] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:35.731 [2024-07-26 18:18:01.688085] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:35.731 [2024-07-26 18:18:01.693178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.731 passed 00:17:35.731 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 18:18:01.776971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.731 [2024-07-26 18:18:01.778261] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:35.731 [2024-07-26 18:18:01.778315] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:35.731 [2024-07-26 18:18:01.779997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.731 passed 00:17:35.731 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 18:18:01.867369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.989 [2024-07-26 18:18:01.959075] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:35.989 [2024-07-26 18:18:01.967100] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:35.989 [2024-07-26 18:18:01.975074] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:35.989 [2024-07-26 18:18:01.983071] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:35.989 [2024-07-26 18:18:02.012172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.989 passed 00:17:35.989 Test: admin_create_io_sq_verify_pc ...[2024-07-26 18:18:02.095881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.989 [2024-07-26 18:18:02.112088] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:35.989 [2024-07-26 18:18:02.129394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:36.247 passed 00:17:36.247 Test: admin_create_io_qp_max_qps ...[2024-07-26 18:18:02.211974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.183 [2024-07-26 18:18:03.310077] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:37.748 [2024-07-26 18:18:03.697739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.748 passed 00:17:37.748 Test: admin_create_io_sq_shared_cq ...[2024-07-26 18:18:03.778658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:38.007 [2024-07-26 18:18:03.914069] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:38.007 [2024-07-26 18:18:03.951154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:38.007 passed 00:17:38.007 00:17:38.007 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.007 suites 1 1 n/a 0 0 00:17:38.007 tests 18 18 18 0 0 00:17:38.007 asserts 360 360 360 0 n/a 00:17:38.007 00:17:38.007 Elapsed time = 1.564 seconds 00:17:38.007 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 
1464068 00:17:38.007 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1464068 ']' 00:17:38.007 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1464068 00:17:38.007 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:38.007 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.007 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464068 00:17:38.007 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:38.007 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:38.007 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464068' 00:17:38.007 killing process with pid 1464068 00:17:38.007 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1464068 00:17:38.007 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1464068 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:38.266 00:17:38.266 real 0m5.694s 00:17:38.266 user 0m16.081s 00:17:38.266 sys 0m0.523s 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:38.266 ************************************ 00:17:38.266 END TEST nvmf_vfio_user_nvme_compliance 00:17:38.266 ************************************ 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.266 ************************************ 00:17:38.266 START TEST nvmf_vfio_user_fuzz 00:17:38.266 ************************************ 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:38.266 * Looking for test storage... 
00:17:38.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:38.266 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1464906 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1464906' 00:17:38.267 Process pid: 1464906 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1464906 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1464906 ']' 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.267 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.835 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.835 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:38.835 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 malloc0 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
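The RPC sequence traced above is what builds the vfio-user fuzz target. For reference, a minimal standalone sketch of the same setup, assuming nvmf_tgt is already running and SPDK's scripts/rpc.py is on PATH talking to its default /var/tmp/spdk.sock (all RPC names and arguments are taken directly from the trace):

  # Create the vfio-user transport and back it with a 64 MiB, 512 B-block malloc bdev
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  # vfio-user listeners address a directory, not an IP
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0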
00:17:39.772 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:11.840 Fuzzing completed. Shutting down the fuzz application 00:18:11.840 00:18:11.840 Dumping successful admin opcodes: 00:18:11.840 8, 9, 10, 24, 00:18:11.840 Dumping successful io opcodes: 00:18:11.840 0, 00:18:11.840 NS: 0x200003a1ef00 I/O qp, Total commands completed: 575097, total successful commands: 2215, random_seed: 939873920 00:18:11.840 NS: 0x200003a1ef00 admin qp, Total commands completed: 75564, total successful commands: 590, random_seed: 411961792 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1464906 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1464906 ']' 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1464906 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464906 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464906' 00:18:11.840 killing process with pid 1464906 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1464906 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1464906 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:11.840 00:18:11.840 real 0m32.191s 00:18:11.840 user 0m31.387s 00:18:11.840 sys 0m28.905s 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.840 
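The 30-second run above completed 575097 I/O commands and 75564 admin commands against the target. To reproduce it outside the test harness, the invocation recorded in the trace can be run directly; the binary path is relative to the SPDK tree, -m is the reactor core mask, -t the run time in seconds, -S the random seed, and -F the target trid, while the remaining flags are copied verbatim from the trace:

  # Re-run the fuzzer by hand against the same vfio-user endpoint
  TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a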
************************************ 00:18:11.840 END TEST nvmf_vfio_user_fuzz 00:18:11.840 ************************************ 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.840 ************************************ 00:18:11.840 START TEST nvmf_auth_target 00:18:11.840 ************************************ 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:11.840 * Looking for test storage... 00:18:11.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.840 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.841 18:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:11.841 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.778 18:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:12.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:12.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:12.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.778 18:18:38 
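The "Found net devices under ..." lines are derived from sysfs: each PCI function's network interfaces appear under /sys/bus/pci/devices/<BDF>/net/, exactly as the pci_net_devs glob in the trace shows. A minimal sketch of that discovery step, using the two e810 functions found above:

  # Enumerate net devices per PCI function, mirroring the trace's glob and echo
  for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
  done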
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:12.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:12.778 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.779 18:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:18:12.779 00:18:12.779 --- 10.0.0.2 ping statistics --- 00:18:12.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.779 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:12.779 00:18:12.779 --- 10.0.0.1 ping statistics --- 00:18:12.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.779 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1470811 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1470811 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1470811 ']' 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.779 18:18:38 
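The nvmf_tcp_init steps above build a two-port loopback topology: one port (cvl_0_0) is moved into a network namespace and becomes the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm reachability in both directions. A condensed sketch of that setup, assuming the two ports are cabled back-to-back as on this test node (every command appears in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                 # initiator -> target check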
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.779 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1470864 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=abbd4334305383eada5cccebba828ed0ac9dd8b15280796f 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.awf 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key abbd4334305383eada5cccebba828ed0ac9dd8b15280796f 0 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 abbd4334305383eada5cccebba828ed0ac9dd8b15280796f 0 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=abbd4334305383eada5cccebba828ed0ac9dd8b15280796f 00:18:13.037 18:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.awf 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.awf 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.awf 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:13.037 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e39b3325d748895615712d4fea2a6465d6ef51910ebb5cd6cf82a988bdebdee9 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UE1 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e39b3325d748895615712d4fea2a6465d6ef51910ebb5cd6cf82a988bdebdee9 3 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e39b3325d748895615712d4fea2a6465d6ef51910ebb5cd6cf82a988bdebdee9 3 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e39b3325d748895615712d4fea2a6465d6ef51910ebb5cd6cf82a988bdebdee9 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:13.038 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UE1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UE1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.UE1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.297 18:18:39 
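gen_dhchap_key above draws 24 random bytes as hex via xxd, then pipes them through an inline python snippet whose body the xtrace does not expand. A hedged reconstruction follows, assuming the python step emits the standard NVMe DH-HMAC-CHAP secret representation (base64 of the key bytes followed by their little-endian CRC-32, with the two-digit hash id 0-3 matching the null/sha256/sha384/sha512 table in the trace); the exact SPDK implementation may differ:

  # Sketch of gen_dhchap_key null 48: hex key, DHHC-1 wrapping (assumed), 0600 file
  key=$(xxd -p -c0 -l 24 /dev/urandom)
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" 0 > "$file" <<'EOF'
  import base64, binascii, struct, sys
  key = bytes.fromhex(sys.argv[1])
  digest = int(sys.argv[2])          # 0=null 1=sha256 2=sha384 3=sha512
  blob = base64.b64encode(key + struct.pack("<I", binascii.crc32(key))).decode()
  print("DHHC-1:{:02x}:{}:".format(digest, blob))   # assumed secret format
  EOF
  chmod 0600 "$file"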
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bff5374c72cc900724a61edc4c3c8d94 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0zr 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bff5374c72cc900724a61edc4c3c8d94 1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bff5374c72cc900724a61edc4c3c8d94 1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bff5374c72cc900724a61edc4c3c8d94 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0zr 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0zr 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0zr 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6ebac6e82e05a6ce094bc409e4e7ee9c8077d2a522943165 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fW8 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6ebac6e82e05a6ce094bc409e4e7ee9c8077d2a522943165 2 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
6ebac6e82e05a6ce094bc409e4e7ee9c8077d2a522943165 2 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6ebac6e82e05a6ce094bc409e4e7ee9c8077d2a522943165 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fW8 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fW8 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.fW8 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=57009ce4eb49faa59225c7c85d9a9163e726a0a5f3cb71f3 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HRW 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 57009ce4eb49faa59225c7c85d9a9163e726a0a5f3cb71f3 2 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 57009ce4eb49faa59225c7c85d9a9163e726a0a5f3cb71f3 2 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=57009ce4eb49faa59225c7c85d9a9163e726a0a5f3cb71f3 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HRW 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HRW 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.HRW 00:18:13.297 18:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=768f2c46662773bdd1c61cb46ff9ec3c 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2Zl 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 768f2c46662773bdd1c61cb46ff9ec3c 1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 768f2c46662773bdd1c61cb46ff9ec3c 1 00:18:13.297 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=768f2c46662773bdd1c61cb46ff9ec3c 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2Zl 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2Zl 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.2Zl 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=037e20952a4f7516eff65e2a1f1ea9c0f9be2ce7e7a57d0b45613c226626be1c 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:13.298 
18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.X8B 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 037e20952a4f7516eff65e2a1f1ea9c0f9be2ce7e7a57d0b45613c226626be1c 3 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 037e20952a4f7516eff65e2a1f1ea9c0f9be2ce7e7a57d0b45613c226626be1c 3 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=037e20952a4f7516eff65e2a1f1ea9c0f9be2ce7e7a57d0b45613c226626be1c 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:13.298 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.X8B 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.X8B 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.X8B 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1470811 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1470811 ']' 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.556 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1470864 /var/tmp/host.sock 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1470864 ']' 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:18:13.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.814 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.072 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.072 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:14.072 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:14.072 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.072 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.awf 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.awf 00:18:14.072 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.awf 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.UE1 ]] 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UE1 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UE1 00:18:14.330 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UE1 00:18:14.588 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:14.588 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0zr 00:18:14.588 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.588 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.588 18:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.588 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0zr 00:18:14.588 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0zr 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.fW8 ]] 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fW8 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fW8 00:18:14.846 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fW8 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HRW 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HRW 00:18:15.104 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HRW 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.2Zl ]] 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Zl 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Zl 00:18:15.362 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2Zl 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.X8B 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.X8B 00:18:15.619 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.X8B 00:18:15.876 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:15.876 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:15.876 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.876 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.876 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:15.876 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.134 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.392 00:18:16.392 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.392 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.392 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.650 { 00:18:16.650 "cntlid": 1, 00:18:16.650 "qid": 0, 00:18:16.650 "state": "enabled", 00:18:16.650 "thread": "nvmf_tgt_poll_group_000", 00:18:16.650 "listen_address": { 00:18:16.650 "trtype": "TCP", 00:18:16.650 "adrfam": "IPv4", 00:18:16.650 "traddr": "10.0.0.2", 00:18:16.650 "trsvcid": "4420" 00:18:16.650 }, 00:18:16.650 "peer_address": { 00:18:16.650 "trtype": "TCP", 00:18:16.650 "adrfam": "IPv4", 00:18:16.650 "traddr": "10.0.0.1", 00:18:16.650 "trsvcid": "35960" 00:18:16.650 }, 00:18:16.650 "auth": { 00:18:16.650 "state": "completed", 00:18:16.650 "digest": "sha256", 00:18:16.650 "dhgroup": "null" 00:18:16.650 } 00:18:16.650 } 00:18:16.650 ]' 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.650 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.909 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:18:18.280 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.280 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.280 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.280 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.280 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:18.570 00:18:18.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.570 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.828 { 00:18:18.828 "cntlid": 3, 00:18:18.828 "qid": 0, 00:18:18.828 "state": "enabled", 00:18:18.828 "thread": "nvmf_tgt_poll_group_000", 00:18:18.828 "listen_address": { 00:18:18.828 "trtype": "TCP", 00:18:18.828 "adrfam": "IPv4", 00:18:18.828 "traddr": "10.0.0.2", 00:18:18.828 "trsvcid": "4420" 00:18:18.828 }, 00:18:18.828 "peer_address": { 00:18:18.828 "trtype": "TCP", 00:18:18.828 "adrfam": "IPv4", 00:18:18.828 "traddr": "10.0.0.1", 00:18:18.828 "trsvcid": "35978" 00:18:18.828 }, 00:18:18.828 "auth": { 00:18:18.828 "state": "completed", 00:18:18.828 "digest": "sha256", 00:18:18.828 "dhgroup": "null" 00:18:18.828 } 00:18:18.828 } 00:18:18.828 ]' 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.828 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.085 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.019 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:20.019 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.276 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.277 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.844 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.844 { 00:18:20.844 "cntlid": 5, 00:18:20.844 "qid": 0, 00:18:20.844 "state": "enabled", 00:18:20.844 "thread": "nvmf_tgt_poll_group_000", 00:18:20.844 "listen_address": { 00:18:20.844 "trtype": "TCP", 00:18:20.844 "adrfam": "IPv4", 00:18:20.844 "traddr": "10.0.0.2", 00:18:20.844 "trsvcid": "4420" 00:18:20.844 }, 00:18:20.844 "peer_address": { 00:18:20.844 "trtype": "TCP", 00:18:20.844 "adrfam": "IPv4", 00:18:20.844 "traddr": "10.0.0.1", 00:18:20.844 "trsvcid": "36000" 00:18:20.844 }, 00:18:20.844 "auth": { 00:18:20.844 "state": "completed", 00:18:20.844 "digest": "sha256", 00:18:20.844 "dhgroup": "null" 00:18:20.844 } 00:18:20.844 } 00:18:20.844 ]' 00:18:20.844 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.103 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.360 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:22.298 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.555 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.813 00:18:22.813 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.813 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.813 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.070 { 00:18:23.070 "cntlid": 7, 00:18:23.070 "qid": 0, 00:18:23.070 "state": "enabled", 00:18:23.070 "thread": "nvmf_tgt_poll_group_000", 00:18:23.070 "listen_address": { 00:18:23.070 "trtype": "TCP", 00:18:23.070 "adrfam": "IPv4", 00:18:23.070 "traddr": "10.0.0.2", 00:18:23.070 "trsvcid": "4420" 00:18:23.070 }, 00:18:23.070 "peer_address": { 00:18:23.070 "trtype": "TCP", 00:18:23.070 "adrfam": "IPv4", 00:18:23.070 "traddr": "10.0.0.1", 00:18:23.070 "trsvcid": "36024" 00:18:23.070 }, 00:18:23.070 "auth": { 00:18:23.070 "state": "completed", 00:18:23.070 "digest": "sha256", 00:18:23.070 "dhgroup": "null" 00:18:23.070 } 00:18:23.070 } 00:18:23.070 ]' 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.070 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.328 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.328 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.328 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.587 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.525 18:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.525 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.526 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.090 00:18:25.090 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.091 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.091 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.091 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.091 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.091 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.091 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.091 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.091 18:18:51 
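From @91-@93 onward the test walks a digest x dhgroup x keyid matrix, and each connect_authenticate pass reconfigures the host and proves that a controller can attach with in-band DH-HMAC-CHAP. A condensed sketch of one pass as traced here (sha256/ffdhe2048/key0); $hostnqn is shorthand introduced for readability, not a variable from the script:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach succeeded
hostrpc bdev_nvme_detach_controller nvme0

The same pass is then repeated against the kernel initiator via nvme connect/disconnect with the DHHC-1 secrets themselves, as the @52/@55 lines show.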
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.091 { 00:18:25.091 "cntlid": 9, 00:18:25.091 "qid": 0, 00:18:25.091 "state": "enabled", 00:18:25.091 "thread": "nvmf_tgt_poll_group_000", 00:18:25.091 "listen_address": { 00:18:25.091 "trtype": "TCP", 00:18:25.091 "adrfam": "IPv4", 00:18:25.091 "traddr": "10.0.0.2", 00:18:25.091 "trsvcid": "4420" 00:18:25.091 }, 00:18:25.091 "peer_address": { 00:18:25.091 "trtype": "TCP", 00:18:25.091 "adrfam": "IPv4", 00:18:25.091 "traddr": "10.0.0.1", 00:18:25.091 "trsvcid": "34490" 00:18:25.091 }, 00:18:25.091 "auth": { 00:18:25.091 "state": "completed", 00:18:25.091 "digest": "sha256", 00:18:25.091 "dhgroup": "ffdhe2048" 00:18:25.091 } 00:18:25.091 } 00:18:25.091 ]' 00:18:25.091 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.348 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.606 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.539 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.797 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.056 00:18:27.057 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.057 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.057 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.314 { 00:18:27.314 "cntlid": 11, 00:18:27.314 "qid": 0, 00:18:27.314 "state": "enabled", 00:18:27.314 "thread": "nvmf_tgt_poll_group_000", 00:18:27.314 "listen_address": { 
00:18:27.314 "trtype": "TCP", 00:18:27.314 "adrfam": "IPv4", 00:18:27.314 "traddr": "10.0.0.2", 00:18:27.314 "trsvcid": "4420" 00:18:27.314 }, 00:18:27.314 "peer_address": { 00:18:27.314 "trtype": "TCP", 00:18:27.314 "adrfam": "IPv4", 00:18:27.314 "traddr": "10.0.0.1", 00:18:27.314 "trsvcid": "34504" 00:18:27.314 }, 00:18:27.314 "auth": { 00:18:27.314 "state": "completed", 00:18:27.314 "digest": "sha256", 00:18:27.314 "dhgroup": "ffdhe2048" 00:18:27.314 } 00:18:27.314 } 00:18:27.314 ]' 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.314 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.571 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.571 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.571 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.571 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.571 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.828 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:28.759 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.017 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.275 00:18:29.275 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.275 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.275 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.532 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.532 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.532 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.532 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.532 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.532 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.532 { 00:18:29.532 "cntlid": 13, 00:18:29.532 "qid": 0, 00:18:29.532 "state": "enabled", 00:18:29.532 "thread": "nvmf_tgt_poll_group_000", 00:18:29.532 "listen_address": { 00:18:29.532 "trtype": "TCP", 00:18:29.532 "adrfam": "IPv4", 00:18:29.532 "traddr": "10.0.0.2", 00:18:29.532 "trsvcid": "4420" 00:18:29.532 }, 00:18:29.533 "peer_address": { 00:18:29.533 "trtype": "TCP", 00:18:29.533 "adrfam": "IPv4", 00:18:29.533 "traddr": "10.0.0.1", 00:18:29.533 "trsvcid": "34530" 00:18:29.533 }, 00:18:29.533 "auth": { 00:18:29.533 
"state": "completed", 00:18:29.533 "digest": "sha256", 00:18:29.533 "dhgroup": "ffdhe2048" 00:18:29.533 } 00:18:29.533 } 00:18:29.533 ]' 00:18:29.533 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.533 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.533 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.790 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.790 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.790 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.790 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.790 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.048 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.982 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.239 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.496 00:18:31.496 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.496 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.496 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.754 { 00:18:31.754 "cntlid": 15, 00:18:31.754 "qid": 0, 00:18:31.754 "state": "enabled", 00:18:31.754 "thread": "nvmf_tgt_poll_group_000", 00:18:31.754 "listen_address": { 00:18:31.754 "trtype": "TCP", 00:18:31.754 "adrfam": "IPv4", 00:18:31.754 "traddr": "10.0.0.2", 00:18:31.754 "trsvcid": "4420" 00:18:31.754 }, 00:18:31.754 "peer_address": { 00:18:31.754 "trtype": "TCP", 00:18:31.754 "adrfam": "IPv4", 00:18:31.754 "traddr": "10.0.0.1", 00:18:31.754 "trsvcid": "34556" 00:18:31.754 }, 00:18:31.754 "auth": { 00:18:31.754 "state": "completed", 00:18:31.754 "digest": "sha256", 00:18:31.754 "dhgroup": "ffdhe2048" 00:18:31.754 } 00:18:31.754 } 00:18:31.754 ]' 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.754 18:18:57 
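For keyid 3 the pass differs: ckeys[3] is empty, so the @37 expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), where $3 is connect_authenticate's keyid argument, yields an empty array and the controller-key flags are dropped, i.e. authentication is unidirectional. Roughly (again with $hostnqn standing in for the uuid-based host NQN):

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3   # no --dhchap-ctrlr-key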
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.754 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.012 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.012 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.012 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.270 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.202 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.460 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.718 00:18:33.718 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.718 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.718 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.976 { 00:18:33.976 "cntlid": 17, 00:18:33.976 "qid": 0, 00:18:33.976 "state": "enabled", 00:18:33.976 "thread": "nvmf_tgt_poll_group_000", 00:18:33.976 "listen_address": { 00:18:33.976 "trtype": "TCP", 00:18:33.976 "adrfam": "IPv4", 00:18:33.976 "traddr": "10.0.0.2", 00:18:33.976 "trsvcid": "4420" 00:18:33.976 }, 00:18:33.976 "peer_address": { 00:18:33.976 "trtype": "TCP", 00:18:33.976 "adrfam": "IPv4", 00:18:33.976 "traddr": "10.0.0.1", 00:18:33.976 "trsvcid": "39238" 00:18:33.976 }, 00:18:33.976 "auth": { 00:18:33.976 "state": "completed", 00:18:33.976 "digest": "sha256", 00:18:33.976 "dhgroup": "ffdhe3072" 00:18:33.976 } 00:18:33.976 } 00:18:33.976 ]' 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.976 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.976 18:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.234 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.234 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.234 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.491 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:18:35.455 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.455 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.455 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.455 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.455 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.455 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.456 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:35.456 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.713 18:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.713 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.970 00:18:35.970 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.970 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.970 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.228 { 00:18:36.228 "cntlid": 19, 00:18:36.228 "qid": 0, 00:18:36.228 "state": "enabled", 00:18:36.228 "thread": "nvmf_tgt_poll_group_000", 00:18:36.228 "listen_address": { 00:18:36.228 "trtype": "TCP", 00:18:36.228 "adrfam": "IPv4", 00:18:36.228 "traddr": "10.0.0.2", 00:18:36.228 "trsvcid": "4420" 00:18:36.228 }, 00:18:36.228 "peer_address": { 00:18:36.228 "trtype": "TCP", 00:18:36.228 "adrfam": "IPv4", 00:18:36.228 "traddr": "10.0.0.1", 00:18:36.228 "trsvcid": "39264" 00:18:36.228 }, 00:18:36.228 "auth": { 00:18:36.228 "state": "completed", 00:18:36.228 "digest": "sha256", 00:18:36.228 "dhgroup": "ffdhe3072" 00:18:36.228 } 00:18:36.228 } 00:18:36.228 ]' 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.228 18:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.228 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.486 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:18:37.420 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.420 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.420 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.420 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.677 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.677 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.677 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.677 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.933 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.191 00:18:38.191 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.191 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.191 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.448 { 00:18:38.448 "cntlid": 21, 00:18:38.448 "qid": 0, 00:18:38.448 "state": "enabled", 00:18:38.448 "thread": "nvmf_tgt_poll_group_000", 00:18:38.448 "listen_address": { 00:18:38.448 "trtype": "TCP", 00:18:38.448 "adrfam": "IPv4", 00:18:38.448 "traddr": "10.0.0.2", 00:18:38.448 "trsvcid": "4420" 00:18:38.448 }, 00:18:38.448 "peer_address": { 00:18:38.448 "trtype": "TCP", 00:18:38.448 "adrfam": "IPv4", 00:18:38.448 "traddr": "10.0.0.1", 00:18:38.448 "trsvcid": "39284" 00:18:38.448 }, 00:18:38.448 "auth": { 00:18:38.448 "state": "completed", 00:18:38.448 "digest": "sha256", 00:18:38.448 "dhgroup": "ffdhe3072" 00:18:38.448 } 00:18:38.448 } 00:18:38.448 ]' 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.448 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.706 
18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.636 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.893 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.457 00:18:40.457 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.457 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.457 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.714 { 00:18:40.714 "cntlid": 23, 00:18:40.714 "qid": 0, 00:18:40.714 "state": "enabled", 00:18:40.714 "thread": "nvmf_tgt_poll_group_000", 00:18:40.714 "listen_address": { 00:18:40.714 "trtype": "TCP", 00:18:40.714 "adrfam": "IPv4", 00:18:40.714 "traddr": "10.0.0.2", 00:18:40.714 "trsvcid": "4420" 00:18:40.714 }, 00:18:40.714 "peer_address": { 00:18:40.714 "trtype": "TCP", 00:18:40.714 "adrfam": "IPv4", 00:18:40.714 "traddr": "10.0.0.1", 00:18:40.714 "trsvcid": "39314" 00:18:40.714 }, 00:18:40.714 "auth": { 00:18:40.714 "state": "completed", 00:18:40.714 "digest": "sha256", 00:18:40.714 "dhgroup": "ffdhe3072" 00:18:40.714 } 00:18:40.714 } 00:18:40.714 ]' 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.714 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.971 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:18:41.904 18:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.904 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.162 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.727 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.727 { 00:18:42.727 "cntlid": 25, 00:18:42.727 "qid": 0, 00:18:42.727 "state": "enabled", 00:18:42.727 "thread": "nvmf_tgt_poll_group_000", 00:18:42.727 "listen_address": { 00:18:42.727 "trtype": "TCP", 00:18:42.727 "adrfam": "IPv4", 00:18:42.727 "traddr": "10.0.0.2", 00:18:42.727 "trsvcid": "4420" 00:18:42.727 }, 00:18:42.727 "peer_address": { 00:18:42.727 "trtype": "TCP", 00:18:42.727 "adrfam": "IPv4", 00:18:42.727 "traddr": "10.0.0.1", 00:18:42.727 "trsvcid": "39340" 00:18:42.727 }, 00:18:42.727 "auth": { 00:18:42.727 "state": "completed", 00:18:42.727 "digest": "sha256", 00:18:42.727 "dhgroup": "ffdhe4096" 00:18:42.727 } 00:18:42.727 } 00:18:42.727 ]' 00:18:42.727 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.984 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.241 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
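The records above and below repeat one authentication cycle per --dhchap-dhgroups value and key index. A minimal sketch of a single cycle, reconstructed from this trace and nothing more: rpc_cmd talks to the nvmf target's default RPC socket, the -s /var/tmp/host.sock calls drive the host-side bdev_nvme stack (the trace invokes the full scripts/rpc.py path), key0/ckey0 name DH-HMAC-CHAP keys registered earlier in target/auth.sh (not shown in this excerpt), and $hostnqn, $hostid, and $key0_secret are illustrative placeholders for the UUID-based host NQN and DHHC-1 secret strings visible in the surrounding records.

    # Host side: restrict the initiator to the digest/DH group under test.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # Target side: authorize the host NQN with its DH-HMAC-CHAP key pair.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authenticated connect from the SPDK host stack, then verify the qpair
    # negotiated the expected digest/group and finished authentication.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'                                               # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Repeat the connect with the kernel initiator, passing the secret in-band.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$key0_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Clean up before the next key/group combination.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"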
00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.175 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.432 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.433 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.690 00:18:44.947 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.947 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.947 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.947 { 00:18:44.947 "cntlid": 27, 00:18:44.947 "qid": 0, 00:18:44.947 "state": "enabled", 00:18:44.947 "thread": "nvmf_tgt_poll_group_000", 00:18:44.947 "listen_address": { 00:18:44.947 "trtype": "TCP", 00:18:44.947 "adrfam": "IPv4", 00:18:44.947 "traddr": "10.0.0.2", 00:18:44.947 "trsvcid": "4420" 00:18:44.947 }, 00:18:44.947 "peer_address": { 00:18:44.947 "trtype": "TCP", 00:18:44.947 "adrfam": "IPv4", 00:18:44.947 "traddr": "10.0.0.1", 00:18:44.947 "trsvcid": "56422" 00:18:44.947 }, 00:18:44.947 "auth": { 00:18:44.947 "state": "completed", 00:18:44.947 "digest": "sha256", 00:18:44.947 "dhgroup": "ffdhe4096" 00:18:44.947 } 00:18:44.947 } 00:18:44.947 ]' 00:18:44.947 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.205 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.463 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.396 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.654 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.220 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.220 { 00:18:47.220 "cntlid": 29, 00:18:47.220 "qid": 0, 00:18:47.220 "state": "enabled", 00:18:47.220 "thread": "nvmf_tgt_poll_group_000", 00:18:47.220 "listen_address": { 00:18:47.220 "trtype": "TCP", 00:18:47.220 "adrfam": "IPv4", 00:18:47.220 "traddr": "10.0.0.2", 00:18:47.220 "trsvcid": "4420" 00:18:47.220 }, 00:18:47.220 "peer_address": { 00:18:47.220 "trtype": "TCP", 00:18:47.220 "adrfam": "IPv4", 00:18:47.220 "traddr": "10.0.0.1", 00:18:47.220 "trsvcid": "56434" 00:18:47.220 }, 00:18:47.220 "auth": { 00:18:47.220 "state": "completed", 00:18:47.220 "digest": "sha256", 00:18:47.220 "dhgroup": "ffdhe4096" 00:18:47.220 } 00:18:47.220 } 00:18:47.220 ]' 00:18:47.220 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.478 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.735 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.669 18:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.669 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.927 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.185 00:18:49.443 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.443 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.443 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.701 { 00:18:49.701 "cntlid": 31, 00:18:49.701 "qid": 0, 00:18:49.701 "state": "enabled", 00:18:49.701 "thread": "nvmf_tgt_poll_group_000", 00:18:49.701 "listen_address": { 00:18:49.701 "trtype": "TCP", 00:18:49.701 "adrfam": "IPv4", 00:18:49.701 "traddr": "10.0.0.2", 00:18:49.701 "trsvcid": "4420" 00:18:49.701 }, 00:18:49.701 "peer_address": { 00:18:49.701 "trtype": "TCP", 00:18:49.701 "adrfam": "IPv4", 00:18:49.701 "traddr": "10.0.0.1", 00:18:49.701 "trsvcid": "56472" 00:18:49.701 }, 00:18:49.701 "auth": { 00:18:49.701 "state": "completed", 00:18:49.701 "digest": "sha256", 00:18:49.701 "dhgroup": "ffdhe4096" 00:18:49.701 } 00:18:49.701 } 00:18:49.701 ]' 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.701 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.959 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.896 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.191 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.757 00:18:51.757 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.757 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.757 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.015 { 00:18:52.015 "cntlid": 33, 00:18:52.015 "qid": 0, 00:18:52.015 "state": "enabled", 00:18:52.015 "thread": "nvmf_tgt_poll_group_000", 00:18:52.015 "listen_address": { 
00:18:52.015 "trtype": "TCP", 00:18:52.015 "adrfam": "IPv4", 00:18:52.015 "traddr": "10.0.0.2", 00:18:52.015 "trsvcid": "4420" 00:18:52.015 }, 00:18:52.015 "peer_address": { 00:18:52.015 "trtype": "TCP", 00:18:52.015 "adrfam": "IPv4", 00:18:52.015 "traddr": "10.0.0.1", 00:18:52.015 "trsvcid": "56498" 00:18:52.015 }, 00:18:52.015 "auth": { 00:18:52.015 "state": "completed", 00:18:52.015 "digest": "sha256", 00:18:52.015 "dhgroup": "ffdhe6144" 00:18:52.015 } 00:18:52.015 } 00:18:52.015 ]' 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.015 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.273 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:53.647 18:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.647 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.214 00:18:54.214 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.214 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.214 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.472 { 00:18:54.472 "cntlid": 35, 00:18:54.472 "qid": 0, 00:18:54.472 "state": "enabled", 00:18:54.472 "thread": "nvmf_tgt_poll_group_000", 00:18:54.472 "listen_address": { 00:18:54.472 "trtype": "TCP", 00:18:54.472 "adrfam": "IPv4", 00:18:54.472 "traddr": "10.0.0.2", 00:18:54.472 "trsvcid": "4420" 00:18:54.472 }, 00:18:54.472 "peer_address": { 00:18:54.472 "trtype": "TCP", 00:18:54.472 "adrfam": "IPv4", 00:18:54.472 "traddr": "10.0.0.1", 00:18:54.472 "trsvcid": "55774" 00:18:54.472 
}, 00:18:54.472 "auth": { 00:18:54.472 "state": "completed", 00:18:54.472 "digest": "sha256", 00:18:54.472 "dhgroup": "ffdhe6144" 00:18:54.472 } 00:18:54.472 } 00:18:54.472 ]' 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.472 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.730 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:18:55.664 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.664 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.664 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.664 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.922 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.922 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.922 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:55.922 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.922 18:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.922 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.746 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.746 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.746 { 00:18:56.746 "cntlid": 37, 00:18:56.746 "qid": 0, 00:18:56.746 "state": "enabled", 00:18:56.746 "thread": "nvmf_tgt_poll_group_000", 00:18:56.746 "listen_address": { 00:18:56.746 "trtype": "TCP", 00:18:56.746 "adrfam": "IPv4", 00:18:56.746 "traddr": "10.0.0.2", 00:18:56.746 "trsvcid": "4420" 00:18:56.746 }, 00:18:56.746 "peer_address": { 00:18:56.746 "trtype": "TCP", 00:18:56.746 "adrfam": "IPv4", 00:18:56.746 "traddr": "10.0.0.1", 00:18:56.746 "trsvcid": "55786" 00:18:56.746 }, 00:18:56.746 "auth": { 00:18:56.746 "state": "completed", 00:18:56.747 "digest": "sha256", 00:18:56.747 "dhgroup": "ffdhe6144" 00:18:56.747 } 00:18:56.747 } 00:18:56.747 ]' 00:18:56.747 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.005 18:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.005 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.005 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.005 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.005 18:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.005 18:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.005 18:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.288 18:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:18:58.229 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.229 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.230 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.230 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.230 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.230 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.230 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.230 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.487 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.052 00:18:59.052 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.052 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.052 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.310 { 00:18:59.310 "cntlid": 39, 00:18:59.310 "qid": 0, 00:18:59.310 "state": "enabled", 00:18:59.310 "thread": "nvmf_tgt_poll_group_000", 00:18:59.310 "listen_address": { 00:18:59.310 "trtype": "TCP", 00:18:59.310 "adrfam": "IPv4", 00:18:59.310 "traddr": "10.0.0.2", 00:18:59.310 "trsvcid": "4420" 00:18:59.310 }, 00:18:59.310 "peer_address": { 00:18:59.310 "trtype": "TCP", 00:18:59.310 "adrfam": "IPv4", 00:18:59.310 "traddr": "10.0.0.1", 00:18:59.310 "trsvcid": "55800" 00:18:59.310 }, 00:18:59.310 "auth": { 00:18:59.310 "state": "completed", 00:18:59.310 "digest": "sha256", 00:18:59.310 "dhgroup": "ffdhe6144" 00:18:59.310 } 00:18:59.310 } 00:18:59.310 ]' 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.310 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.568 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.568 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.568 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.568 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.568 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.826 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.759 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.017 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.950 00:19:01.950 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.950 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.950 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.208 { 00:19:02.208 "cntlid": 41, 00:19:02.208 "qid": 0, 00:19:02.208 "state": "enabled", 00:19:02.208 "thread": "nvmf_tgt_poll_group_000", 00:19:02.208 "listen_address": { 00:19:02.208 "trtype": "TCP", 00:19:02.208 "adrfam": "IPv4", 00:19:02.208 "traddr": "10.0.0.2", 00:19:02.208 "trsvcid": "4420" 00:19:02.208 }, 00:19:02.208 "peer_address": { 00:19:02.208 "trtype": "TCP", 00:19:02.208 "adrfam": "IPv4", 00:19:02.208 "traddr": "10.0.0.1", 00:19:02.208 "trsvcid": "55810" 00:19:02.208 }, 00:19:02.208 "auth": { 00:19:02.208 "state": "completed", 00:19:02.208 "digest": "sha256", 00:19:02.208 "dhgroup": "ffdhe8192" 00:19:02.208 } 00:19:02.208 } 00:19:02.208 ]' 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:02.208 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.465 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.838 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.771 00:19:04.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.028 { 00:19:05.028 "cntlid": 43, 00:19:05.028 "qid": 0, 00:19:05.028 "state": "enabled", 00:19:05.028 "thread": "nvmf_tgt_poll_group_000", 00:19:05.028 "listen_address": { 00:19:05.028 "trtype": "TCP", 00:19:05.028 "adrfam": "IPv4", 00:19:05.028 "traddr": "10.0.0.2", 00:19:05.028 "trsvcid": "4420" 00:19:05.028 }, 00:19:05.028 "peer_address": { 00:19:05.028 "trtype": "TCP", 00:19:05.028 "adrfam": "IPv4", 00:19:05.028 "traddr": "10.0.0.1", 00:19:05.028 "trsvcid": "33280" 00:19:05.028 }, 00:19:05.028 "auth": { 00:19:05.028 "state": "completed", 00:19:05.028 "digest": "sha256", 00:19:05.028 "dhgroup": "ffdhe8192" 00:19:05.028 } 00:19:05.028 } 00:19:05.028 ]' 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.028 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.286 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.218 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.476 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.733 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.733 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.733 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.663 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.663 { 00:19:07.663 "cntlid": 45, 00:19:07.663 "qid": 0, 00:19:07.663 "state": "enabled", 00:19:07.663 "thread": "nvmf_tgt_poll_group_000", 00:19:07.663 "listen_address": { 00:19:07.663 "trtype": "TCP", 00:19:07.663 "adrfam": "IPv4", 00:19:07.663 "traddr": "10.0.0.2", 00:19:07.663 "trsvcid": "4420" 00:19:07.663 }, 00:19:07.663 "peer_address": { 00:19:07.663 "trtype": "TCP", 00:19:07.663 "adrfam": "IPv4", 00:19:07.663 "traddr": "10.0.0.1", 00:19:07.663 "trsvcid": "33316" 00:19:07.663 }, 00:19:07.663 "auth": { 00:19:07.663 "state": "completed", 00:19:07.663 "digest": "sha256", 00:19:07.663 "dhgroup": "ffdhe8192" 00:19:07.663 } 00:19:07.663 } 00:19:07.663 ]' 00:19:07.663 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.920 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.920 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.920 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.921 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.921 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.921 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.921 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.210 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret 
DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.142 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.400 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.334 00:19:10.334 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.334 18:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.334 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.592 { 00:19:10.592 "cntlid": 47, 00:19:10.592 "qid": 0, 00:19:10.592 "state": "enabled", 00:19:10.592 "thread": "nvmf_tgt_poll_group_000", 00:19:10.592 "listen_address": { 00:19:10.592 "trtype": "TCP", 00:19:10.592 "adrfam": "IPv4", 00:19:10.592 "traddr": "10.0.0.2", 00:19:10.592 "trsvcid": "4420" 00:19:10.592 }, 00:19:10.592 "peer_address": { 00:19:10.592 "trtype": "TCP", 00:19:10.592 "adrfam": "IPv4", 00:19:10.592 "traddr": "10.0.0.1", 00:19:10.592 "trsvcid": "33350" 00:19:10.592 }, 00:19:10.592 "auth": { 00:19:10.592 "state": "completed", 00:19:10.592 "digest": "sha256", 00:19:10.592 "dhgroup": "ffdhe8192" 00:19:10.592 } 00:19:10.592 } 00:19:10.592 ]' 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.592 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.850 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.784 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.041 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.607 00:19:12.607 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.607 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.607 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.865 { 00:19:12.865 "cntlid": 49, 00:19:12.865 "qid": 0, 00:19:12.865 "state": "enabled", 00:19:12.865 "thread": "nvmf_tgt_poll_group_000", 00:19:12.865 "listen_address": { 00:19:12.865 "trtype": "TCP", 00:19:12.865 "adrfam": "IPv4", 00:19:12.865 "traddr": "10.0.0.2", 00:19:12.865 "trsvcid": "4420" 00:19:12.865 }, 00:19:12.865 "peer_address": { 00:19:12.865 "trtype": "TCP", 00:19:12.865 "adrfam": "IPv4", 00:19:12.865 "traddr": "10.0.0.1", 00:19:12.865 "trsvcid": "33384" 00:19:12.865 }, 00:19:12.865 "auth": { 00:19:12.865 "state": "completed", 00:19:12.865 "digest": "sha384", 00:19:12.865 "dhgroup": "null" 00:19:12.865 } 00:19:12.865 } 00:19:12.865 ]' 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.124 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:19:14.058 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.059 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.059 18:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.059 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.059 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.059 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.059 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:14.059 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.317 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.575 00:19:14.575 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.575 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.575 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.833 { 00:19:14.833 "cntlid": 51, 00:19:14.833 "qid": 0, 00:19:14.833 "state": "enabled", 00:19:14.833 "thread": "nvmf_tgt_poll_group_000", 00:19:14.833 "listen_address": { 00:19:14.833 "trtype": "TCP", 00:19:14.833 "adrfam": "IPv4", 00:19:14.833 "traddr": "10.0.0.2", 00:19:14.833 "trsvcid": "4420" 00:19:14.833 }, 00:19:14.833 "peer_address": { 00:19:14.833 "trtype": "TCP", 00:19:14.833 "adrfam": "IPv4", 00:19:14.833 "traddr": "10.0.0.1", 00:19:14.833 "trsvcid": "34336" 00:19:14.833 }, 00:19:14.833 "auth": { 00:19:14.833 "state": "completed", 00:19:14.833 "digest": "sha384", 00:19:14.833 "dhgroup": "null" 00:19:14.833 } 00:19:14.833 } 00:19:14.833 ]' 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.833 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.091 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.091 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.091 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.091 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.091 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.349 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.282 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.540 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.106 00:19:17.106 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.106 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.106 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.106 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.106 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.106 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.106 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.363 { 00:19:17.363 "cntlid": 53, 00:19:17.363 "qid": 0, 00:19:17.363 "state": "enabled", 00:19:17.363 "thread": "nvmf_tgt_poll_group_000", 00:19:17.363 "listen_address": { 00:19:17.363 "trtype": "TCP", 00:19:17.363 "adrfam": "IPv4", 00:19:17.363 "traddr": "10.0.0.2", 00:19:17.363 "trsvcid": "4420" 00:19:17.363 }, 00:19:17.363 "peer_address": { 00:19:17.363 "trtype": "TCP", 00:19:17.363 "adrfam": "IPv4", 00:19:17.363 "traddr": "10.0.0.1", 00:19:17.363 "trsvcid": "34372" 00:19:17.363 }, 00:19:17.363 "auth": { 00:19:17.363 "state": "completed", 00:19:17.363 "digest": "sha384", 00:19:17.363 "dhgroup": "null" 00:19:17.363 } 00:19:17.363 } 00:19:17.363 ]' 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.363 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.620 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:18.552 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.810 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.376 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.376 { 00:19:19.376 "cntlid": 55, 00:19:19.376 "qid": 0, 00:19:19.376 "state": "enabled", 00:19:19.376 "thread": "nvmf_tgt_poll_group_000", 00:19:19.376 "listen_address": { 00:19:19.376 "trtype": "TCP", 00:19:19.376 "adrfam": "IPv4", 00:19:19.376 "traddr": "10.0.0.2", 00:19:19.376 "trsvcid": "4420" 00:19:19.376 }, 00:19:19.376 "peer_address": { 
00:19:19.376 "trtype": "TCP", 00:19:19.376 "adrfam": "IPv4", 00:19:19.376 "traddr": "10.0.0.1", 00:19:19.376 "trsvcid": "34400" 00:19:19.376 }, 00:19:19.376 "auth": { 00:19:19.376 "state": "completed", 00:19:19.376 "digest": "sha384", 00:19:19.376 "dhgroup": "null" 00:19:19.376 } 00:19:19.376 } 00:19:19.376 ]' 00:19:19.376 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.634 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.892 18:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.825 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.083 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.341 00:19:21.341 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.341 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.341 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.598 { 00:19:21.598 "cntlid": 57, 00:19:21.598 "qid": 0, 00:19:21.598 "state": "enabled", 00:19:21.598 "thread": "nvmf_tgt_poll_group_000", 00:19:21.598 "listen_address": { 00:19:21.598 "trtype": "TCP", 00:19:21.598 "adrfam": "IPv4", 00:19:21.598 "traddr": "10.0.0.2", 00:19:21.598 "trsvcid": "4420" 00:19:21.598 }, 00:19:21.598 "peer_address": { 00:19:21.598 "trtype": "TCP", 00:19:21.598 "adrfam": "IPv4", 00:19:21.598 "traddr": "10.0.0.1", 00:19:21.598 "trsvcid": "34416" 00:19:21.598 }, 00:19:21.598 "auth": { 00:19:21.598 "state": "completed", 00:19:21.598 "digest": "sha384", 00:19:21.598 "dhgroup": "ffdhe2048" 00:19:21.598 } 00:19:21.598 } 00:19:21.598 ]' 
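The trace above repeats one fixed round-trip per digest/dhgroup/key combination. Condensed into plain shell, one iteration looks roughly like the sketch below. This is an assumed reconstruction from the commands traced above, not the verbatim target/auth.sh: $key/$ckey stand in for the DHHC-1 secrets printed in the log, and rpc_cmd reaching the nvmf target over its default RPC socket is an assumption, since the trace hides that invocation behind xtrace_disable.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side bdev app, as in the trace
    rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }                        # nvmf target (default socket, assumed)
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Pin the host bdev layer to the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
    # 2. Authorize the host on the subsystem with the key pair under test.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # 3. Attach from the host app, then confirm the qpair completed DH-HMAC-CHAP.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    # 4. Repeat the handshake with the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
            --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
            --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Between iterations only two things change: the --dhchap-digests/--dhchap-dhgroups pair passed to bdev_nvme_set_options (null above, then ffdhe2048 and ffdhe3072 later in the trace), and which keyN/ckeyN is exercised — key3 is the exception, added without a --dhchap-ctrlr-key since the trace shows no controller key for it.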
00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.598 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.856 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.856 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.856 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.856 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.856 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.113 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.042 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.300 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.558 00:19:23.558 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.558 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.558 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.815 { 00:19:23.815 "cntlid": 59, 00:19:23.815 "qid": 0, 00:19:23.815 "state": "enabled", 00:19:23.815 "thread": "nvmf_tgt_poll_group_000", 00:19:23.815 "listen_address": { 00:19:23.815 "trtype": "TCP", 00:19:23.815 "adrfam": "IPv4", 00:19:23.815 "traddr": "10.0.0.2", 00:19:23.815 "trsvcid": "4420" 00:19:23.815 }, 00:19:23.815 "peer_address": { 00:19:23.815 "trtype": "TCP", 00:19:23.815 "adrfam": "IPv4", 00:19:23.815 "traddr": "10.0.0.1", 00:19:23.815 "trsvcid": "41062" 00:19:23.815 }, 00:19:23.815 "auth": { 00:19:23.815 "state": "completed", 00:19:23.815 "digest": "sha384", 00:19:23.815 "dhgroup": "ffdhe2048" 00:19:23.815 } 00:19:23.815 } 00:19:23.815 ]' 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.815 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.071 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.071 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.071 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.071 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.071 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.331 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.297 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.555 
18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.555 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.812 00:19:25.812 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.812 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.812 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.070 { 00:19:26.070 "cntlid": 61, 00:19:26.070 "qid": 0, 00:19:26.070 "state": "enabled", 00:19:26.070 "thread": "nvmf_tgt_poll_group_000", 00:19:26.070 "listen_address": { 00:19:26.070 "trtype": "TCP", 00:19:26.070 "adrfam": "IPv4", 00:19:26.070 "traddr": "10.0.0.2", 00:19:26.070 "trsvcid": "4420" 00:19:26.070 }, 00:19:26.070 "peer_address": { 00:19:26.070 "trtype": "TCP", 00:19:26.070 "adrfam": "IPv4", 00:19:26.070 "traddr": "10.0.0.1", 00:19:26.070 "trsvcid": "41088" 00:19:26.070 }, 00:19:26.070 "auth": { 00:19:26.070 "state": "completed", 00:19:26.070 "digest": "sha384", 00:19:26.070 "dhgroup": "ffdhe2048" 00:19:26.070 } 00:19:26.070 } 00:19:26.070 ]' 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.070 18:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.070 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.328 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.701 
18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.701 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.958 00:19:27.958 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.958 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.958 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.215 { 00:19:28.215 "cntlid": 63, 00:19:28.215 "qid": 0, 00:19:28.215 "state": "enabled", 00:19:28.215 "thread": "nvmf_tgt_poll_group_000", 00:19:28.215 "listen_address": { 00:19:28.215 "trtype": "TCP", 00:19:28.215 "adrfam": "IPv4", 00:19:28.215 "traddr": "10.0.0.2", 00:19:28.215 "trsvcid": "4420" 00:19:28.215 }, 00:19:28.215 "peer_address": { 00:19:28.215 "trtype": "TCP", 00:19:28.215 "adrfam": "IPv4", 00:19:28.215 "traddr": "10.0.0.1", 00:19:28.215 "trsvcid": "41110" 00:19:28.215 }, 00:19:28.215 "auth": { 00:19:28.215 "state": "completed", 00:19:28.215 "digest": "sha384", 00:19:28.215 "dhgroup": "ffdhe2048" 00:19:28.215 } 00:19:28.215 } 00:19:28.215 ]' 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.215 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.472 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.472 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.472 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.472 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.472 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:28.729 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.661 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.919 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.919 18:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.177 00:19:30.177 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.177 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.177 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.435 { 00:19:30.435 "cntlid": 65, 00:19:30.435 "qid": 0, 00:19:30.435 "state": "enabled", 00:19:30.435 "thread": "nvmf_tgt_poll_group_000", 00:19:30.435 "listen_address": { 00:19:30.435 "trtype": "TCP", 00:19:30.435 "adrfam": "IPv4", 00:19:30.435 "traddr": "10.0.0.2", 00:19:30.435 "trsvcid": "4420" 00:19:30.435 }, 00:19:30.435 "peer_address": { 00:19:30.435 "trtype": "TCP", 00:19:30.435 "adrfam": "IPv4", 00:19:30.435 "traddr": "10.0.0.1", 00:19:30.435 "trsvcid": "41136" 00:19:30.435 }, 00:19:30.435 "auth": { 00:19:30.435 "state": "completed", 00:19:30.435 "digest": "sha384", 00:19:30.435 "dhgroup": "ffdhe3072" 00:19:30.435 } 00:19:30.435 } 00:19:30.435 ]' 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.435 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.692 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:30.692 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.692 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.692 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.692 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.950 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:31.882 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.140 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.706 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.706 { 00:19:32.706 "cntlid": 67, 00:19:32.706 "qid": 0, 00:19:32.706 "state": "enabled", 00:19:32.706 "thread": "nvmf_tgt_poll_group_000", 00:19:32.706 "listen_address": { 00:19:32.706 "trtype": "TCP", 00:19:32.706 "adrfam": "IPv4", 00:19:32.706 "traddr": "10.0.0.2", 00:19:32.706 "trsvcid": "4420" 00:19:32.706 }, 00:19:32.706 "peer_address": { 00:19:32.706 "trtype": "TCP", 00:19:32.706 "adrfam": "IPv4", 00:19:32.706 "traddr": "10.0.0.1", 00:19:32.706 "trsvcid": "41164" 00:19:32.706 }, 00:19:32.706 "auth": { 00:19:32.706 "state": "completed", 00:19:32.706 "digest": "sha384", 00:19:32.706 "dhgroup": "ffdhe3072" 00:19:32.706 } 00:19:32.706 } 00:19:32.706 ]' 00:19:32.706 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.964 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.222 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:19:34.156 18:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:34.156 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.414 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.981 00:19:34.981 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.981 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.981 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.981 { 00:19:34.981 "cntlid": 69, 00:19:34.981 "qid": 0, 00:19:34.981 "state": "enabled", 00:19:34.981 "thread": "nvmf_tgt_poll_group_000", 00:19:34.981 "listen_address": { 00:19:34.981 "trtype": "TCP", 00:19:34.981 "adrfam": "IPv4", 00:19:34.981 "traddr": "10.0.0.2", 00:19:34.981 "trsvcid": "4420" 00:19:34.981 }, 00:19:34.981 "peer_address": { 00:19:34.981 "trtype": "TCP", 00:19:34.981 "adrfam": "IPv4", 00:19:34.981 "traddr": "10.0.0.1", 00:19:34.981 "trsvcid": "60140" 00:19:34.981 }, 00:19:34.981 "auth": { 00:19:34.981 "state": "completed", 00:19:34.981 "digest": "sha384", 00:19:34.981 "dhgroup": "ffdhe3072" 00:19:34.981 } 00:19:34.981 } 00:19:34.981 ]' 00:19:34.981 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.238 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.238 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.238 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.238 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.239 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.239 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.239 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.496 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:36.429 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.995 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.254 00:19:37.254 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.254 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.254 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.511 18:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.511 { 00:19:37.511 "cntlid": 71, 00:19:37.511 "qid": 0, 00:19:37.511 "state": "enabled", 00:19:37.511 "thread": "nvmf_tgt_poll_group_000", 00:19:37.511 "listen_address": { 00:19:37.511 "trtype": "TCP", 00:19:37.511 "adrfam": "IPv4", 00:19:37.511 "traddr": "10.0.0.2", 00:19:37.511 "trsvcid": "4420" 00:19:37.511 }, 00:19:37.511 "peer_address": { 00:19:37.511 "trtype": "TCP", 00:19:37.511 "adrfam": "IPv4", 00:19:37.511 "traddr": "10.0.0.1", 00:19:37.511 "trsvcid": "60174" 00:19:37.511 }, 00:19:37.511 "auth": { 00:19:37.511 "state": "completed", 00:19:37.511 "digest": "sha384", 00:19:37.511 "dhgroup": "ffdhe3072" 00:19:37.511 } 00:19:37.511 } 00:19:37.511 ]' 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.511 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.768 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.700 18:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:38.700 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.263 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.521 00:19:39.521 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.521 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.521 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.779 18:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.779 { 00:19:39.779 "cntlid": 73, 00:19:39.779 "qid": 0, 00:19:39.779 "state": "enabled", 00:19:39.779 "thread": "nvmf_tgt_poll_group_000", 00:19:39.779 "listen_address": { 00:19:39.779 "trtype": "TCP", 00:19:39.779 "adrfam": "IPv4", 00:19:39.779 "traddr": "10.0.0.2", 00:19:39.779 "trsvcid": "4420" 00:19:39.779 }, 00:19:39.779 "peer_address": { 00:19:39.779 "trtype": "TCP", 00:19:39.779 "adrfam": "IPv4", 00:19:39.779 "traddr": "10.0.0.1", 00:19:39.779 "trsvcid": "60206" 00:19:39.779 }, 00:19:39.779 "auth": { 00:19:39.779 "state": "completed", 00:19:39.779 "digest": "sha384", 00:19:39.779 "dhgroup": "ffdhe4096" 00:19:39.779 } 00:19:39.779 } 00:19:39.779 ]' 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.779 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.345 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.328 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.586 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.586 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.586 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.844 00:19:41.844 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.844 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.844 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:19:42.103 { 00:19:42.103 "cntlid": 75, 00:19:42.103 "qid": 0, 00:19:42.103 "state": "enabled", 00:19:42.103 "thread": "nvmf_tgt_poll_group_000", 00:19:42.103 "listen_address": { 00:19:42.103 "trtype": "TCP", 00:19:42.103 "adrfam": "IPv4", 00:19:42.103 "traddr": "10.0.0.2", 00:19:42.103 "trsvcid": "4420" 00:19:42.103 }, 00:19:42.103 "peer_address": { 00:19:42.103 "trtype": "TCP", 00:19:42.103 "adrfam": "IPv4", 00:19:42.103 "traddr": "10.0.0.1", 00:19:42.103 "trsvcid": "60236" 00:19:42.103 }, 00:19:42.103 "auth": { 00:19:42.103 "state": "completed", 00:19:42.103 "digest": "sha384", 00:19:42.103 "dhgroup": "ffdhe4096" 00:19:42.103 } 00:19:42.103 } 00:19:42.103 ]' 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.103 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.669 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:43.602 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:43.860 
18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.860 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.118 00:19:44.118 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.118 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.118 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.376 { 00:19:44.376 "cntlid": 77, 00:19:44.376 "qid": 0, 00:19:44.376 "state": "enabled", 00:19:44.376 "thread": "nvmf_tgt_poll_group_000", 00:19:44.376 "listen_address": { 00:19:44.376 "trtype": "TCP", 00:19:44.376 "adrfam": "IPv4", 00:19:44.376 "traddr": "10.0.0.2", 00:19:44.376 "trsvcid": "4420" 00:19:44.376 }, 00:19:44.376 "peer_address": { 
00:19:44.376 "trtype": "TCP", 00:19:44.376 "adrfam": "IPv4", 00:19:44.376 "traddr": "10.0.0.1", 00:19:44.376 "trsvcid": "38082" 00:19:44.376 }, 00:19:44.376 "auth": { 00:19:44.376 "state": "completed", 00:19:44.376 "digest": "sha384", 00:19:44.376 "dhgroup": "ffdhe4096" 00:19:44.376 } 00:19:44.376 } 00:19:44.376 ]' 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.376 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.634 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.634 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.634 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.892 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:45.825 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.084 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.342 00:19:46.342 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.342 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.342 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.600 { 00:19:46.600 "cntlid": 79, 00:19:46.600 "qid": 0, 00:19:46.600 "state": "enabled", 00:19:46.600 "thread": "nvmf_tgt_poll_group_000", 00:19:46.600 "listen_address": { 00:19:46.600 "trtype": "TCP", 00:19:46.600 "adrfam": "IPv4", 00:19:46.600 "traddr": "10.0.0.2", 00:19:46.600 "trsvcid": "4420" 00:19:46.600 }, 00:19:46.600 "peer_address": { 00:19:46.600 "trtype": "TCP", 00:19:46.600 "adrfam": "IPv4", 00:19:46.600 "traddr": "10.0.0.1", 00:19:46.600 "trsvcid": "38112" 00:19:46.600 }, 00:19:46.600 "auth": { 00:19:46.600 "state": "completed", 00:19:46.600 "digest": "sha384", 00:19:46.600 "dhgroup": "ffdhe4096" 00:19:46.600 } 00:19:46.600 } 00:19:46.600 ]' 00:19:46.600 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.858 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.116 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.046 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
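[Annotation] The passes traced above all follow one shape: pick a digest/dhgroup pair, register the host on the subsystem with the key under test, attach from the host stack, confirm the negotiated auth parameters on a live qpair, then tear down. A minimal sketch of one such pass, reconstructed only from the commands visible in this trace (same rpc.py path, sockets, and NQNs; key1/ckey1 stand for whichever key the loop is on):

  # One connect_authenticate pass; host RPC socket vs. target socket as in the trace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host-side initiator: restrict DH-HMAC-CHAP to one digest and one DH group.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host with the key under test; the controller key
  # (ckey) is optional and, when present, makes authentication bidirectional.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach, then verify on the target that the qpair authenticated.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect "completed"

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0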
00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.304 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.870 00:19:48.870 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.870 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.870 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.127 { 00:19:49.127 "cntlid": 81, 00:19:49.127 "qid": 0, 00:19:49.127 "state": "enabled", 00:19:49.127 "thread": "nvmf_tgt_poll_group_000", 00:19:49.127 "listen_address": { 00:19:49.127 "trtype": "TCP", 00:19:49.127 "adrfam": "IPv4", 00:19:49.127 "traddr": "10.0.0.2", 00:19:49.127 "trsvcid": "4420" 00:19:49.127 }, 00:19:49.127 "peer_address": { 00:19:49.127 "trtype": "TCP", 00:19:49.127 "adrfam": "IPv4", 00:19:49.127 "traddr": "10.0.0.1", 00:19:49.127 "trsvcid": "38122" 00:19:49.127 }, 00:19:49.127 "auth": { 00:19:49.127 "state": "completed", 00:19:49.127 "digest": "sha384", 00:19:49.127 "dhgroup": "ffdhe6144" 00:19:49.127 } 00:19:49.127 } 00:19:49.127 ]' 00:19:49.127 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.385 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.385 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.385 18:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.385 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.385 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.385 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.385 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.643 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:50.578 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.835 18:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.835 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.400 00:19:51.400 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.400 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.400 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.657 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.657 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.657 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.657 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.657 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.657 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.657 { 00:19:51.657 "cntlid": 83, 00:19:51.657 "qid": 0, 00:19:51.657 "state": "enabled", 00:19:51.657 "thread": "nvmf_tgt_poll_group_000", 00:19:51.657 "listen_address": { 00:19:51.657 "trtype": "TCP", 00:19:51.657 "adrfam": "IPv4", 00:19:51.657 "traddr": "10.0.0.2", 00:19:51.657 "trsvcid": "4420" 00:19:51.657 }, 00:19:51.657 "peer_address": { 00:19:51.657 "trtype": "TCP", 00:19:51.657 "adrfam": "IPv4", 00:19:51.657 "traddr": "10.0.0.1", 00:19:51.657 "trsvcid": "38146" 00:19:51.657 }, 00:19:51.657 "auth": { 00:19:51.657 "state": "completed", 00:19:51.657 "digest": "sha384", 00:19:51.657 "dhgroup": "ffdhe6144" 00:19:51.657 } 00:19:51.657 } 00:19:51.658 ]' 00:19:51.658 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.915 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.173 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.104 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.360 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.361 18:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.361 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.925 00:19:53.925 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.925 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.925 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.183 { 00:19:54.183 "cntlid": 85, 00:19:54.183 "qid": 0, 00:19:54.183 "state": "enabled", 00:19:54.183 "thread": "nvmf_tgt_poll_group_000", 00:19:54.183 "listen_address": { 00:19:54.183 "trtype": "TCP", 00:19:54.183 "adrfam": "IPv4", 00:19:54.183 "traddr": "10.0.0.2", 00:19:54.183 "trsvcid": "4420" 00:19:54.183 }, 00:19:54.183 "peer_address": { 00:19:54.183 "trtype": "TCP", 00:19:54.183 "adrfam": "IPv4", 00:19:54.183 "traddr": "10.0.0.1", 00:19:54.183 "trsvcid": "48776" 00:19:54.183 }, 00:19:54.183 "auth": { 00:19:54.183 "state": "completed", 00:19:54.183 "digest": "sha384", 00:19:54.183 "dhgroup": "ffdhe6144" 00:19:54.183 } 00:19:54.183 } 00:19:54.183 ]' 00:19:54.183 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.441 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.698 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:55.662 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.920 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.920 18:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.485 00:19:56.486 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.486 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.486 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.743 { 00:19:56.743 "cntlid": 87, 00:19:56.743 "qid": 0, 00:19:56.743 "state": "enabled", 00:19:56.743 "thread": "nvmf_tgt_poll_group_000", 00:19:56.743 "listen_address": { 00:19:56.743 "trtype": "TCP", 00:19:56.743 "adrfam": "IPv4", 00:19:56.743 "traddr": "10.0.0.2", 00:19:56.743 "trsvcid": "4420" 00:19:56.743 }, 00:19:56.743 "peer_address": { 00:19:56.743 "trtype": "TCP", 00:19:56.743 "adrfam": "IPv4", 00:19:56.743 "traddr": "10.0.0.1", 00:19:56.743 "trsvcid": "48784" 00:19:56.743 }, 00:19:56.743 "auth": { 00:19:56.743 "state": "completed", 00:19:56.743 "digest": "sha384", 00:19:56.743 "dhgroup": "ffdhe6144" 00:19:56.743 } 00:19:56.743 } 00:19:56.743 ]' 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.743 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.001 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.001 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.001 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.001 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.001 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.259 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.192 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.450 18:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.382 00:19:59.382 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.382 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.382 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.639 { 00:19:59.639 "cntlid": 89, 00:19:59.639 "qid": 0, 00:19:59.639 "state": "enabled", 00:19:59.639 "thread": "nvmf_tgt_poll_group_000", 00:19:59.639 "listen_address": { 00:19:59.639 "trtype": "TCP", 00:19:59.639 "adrfam": "IPv4", 00:19:59.639 "traddr": "10.0.0.2", 00:19:59.639 "trsvcid": "4420" 00:19:59.639 }, 00:19:59.639 "peer_address": { 00:19:59.639 "trtype": "TCP", 00:19:59.639 "adrfam": "IPv4", 00:19:59.639 "traddr": "10.0.0.1", 00:19:59.639 "trsvcid": "48794" 00:19:59.639 }, 00:19:59.639 "auth": { 00:19:59.639 "state": "completed", 00:19:59.639 "digest": "sha384", 00:19:59.639 "dhgroup": "ffdhe8192" 00:19:59.639 } 00:19:59.639 } 00:19:59.639 ]' 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.639 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.896 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.896 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.896 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.896 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.896 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.154 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:01.085 18:20:27 
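The trace repeats one fixed verification cycle per digest/dhgroup/key combination. Condensed below is the iteration that just ran (sha384 with ffdhe8192 and key0); every address, NQN and flag is taken verbatim from the trace, while rpc_cmd (target side, default socket) and hostrpc (host side, -s /var/tmp/host.sock, per auth.sh@31) are the script's rpc.py wrappers.

# One connect_authenticate cycle, condensed from the surrounding trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# auth.sh@94: restrict the host stack to one digest/dhgroup pair
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# auth.sh@39: allow the host on the subsystem with the key pair under test
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# auth.sh@40: attach through the SPDK host stack, forcing DH-HMAC-CHAP
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# auth.sh@44-49: verify the controller and the qpair auth state, then detach;
# auth.sh@52-56: re-probe with the kernel initiator and remove the host entry.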
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.085 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.343 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.276 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.276 { 00:20:02.276 "cntlid": 91, 00:20:02.276 "qid": 0, 00:20:02.276 "state": "enabled", 00:20:02.276 "thread": "nvmf_tgt_poll_group_000", 00:20:02.276 "listen_address": { 00:20:02.276 "trtype": "TCP", 00:20:02.276 "adrfam": "IPv4", 00:20:02.276 "traddr": "10.0.0.2", 00:20:02.276 "trsvcid": "4420" 00:20:02.276 }, 00:20:02.276 "peer_address": { 00:20:02.276 "trtype": "TCP", 00:20:02.276 "adrfam": "IPv4", 00:20:02.276 "traddr": "10.0.0.1", 00:20:02.276 "trsvcid": "48826" 00:20:02.276 }, 00:20:02.276 "auth": { 00:20:02.276 "state": "completed", 00:20:02.276 "digest": "sha384", 00:20:02.276 "dhgroup": "ffdhe8192" 00:20:02.276 } 00:20:02.276 } 00:20:02.276 ]' 00:20:02.276 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.533 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.790 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:03.723 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.723 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.724 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.724 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.724 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.724 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.724 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.724 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.982 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.916 00:20:04.916 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.916 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.916 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.174 { 00:20:05.174 "cntlid": 93, 00:20:05.174 "qid": 0, 00:20:05.174 "state": "enabled", 00:20:05.174 "thread": "nvmf_tgt_poll_group_000", 00:20:05.174 "listen_address": { 00:20:05.174 "trtype": "TCP", 00:20:05.174 "adrfam": "IPv4", 00:20:05.174 "traddr": "10.0.0.2", 00:20:05.174 "trsvcid": "4420" 00:20:05.174 }, 00:20:05.174 "peer_address": { 00:20:05.174 "trtype": "TCP", 00:20:05.174 "adrfam": "IPv4", 00:20:05.174 "traddr": "10.0.0.1", 00:20:05.174 "trsvcid": "34932" 00:20:05.174 }, 00:20:05.174 "auth": { 00:20:05.174 "state": "completed", 00:20:05.174 "digest": "sha384", 00:20:05.174 "dhgroup": "ffdhe8192" 00:20:05.174 } 00:20:05.174 } 00:20:05.174 ]' 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.174 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.432 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.432 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.432 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.690 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.623 18:20:32 
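In addition to the SPDK host-stack attach, every pass replays the handshake with the Linux kernel initiator: nvme connect receives the same per-key secrets on the command line, and the "NQN:... disconnected 1 controller(s)" lines in this log come from the nvme disconnect that follows. Stripped of the log prefixes (secrets truncated here for readability), the pair run at auth.sh@52/@55 is:

# Kernel-initiator probe; arguments are verbatim from the trace above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0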
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.623 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.882 18:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.817 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.817 { 00:20:07.817 "cntlid": 95, 00:20:07.817 "qid": 0, 00:20:07.817 "state": "enabled", 00:20:07.817 "thread": "nvmf_tgt_poll_group_000", 00:20:07.817 "listen_address": { 00:20:07.817 "trtype": "TCP", 00:20:07.817 "adrfam": "IPv4", 00:20:07.817 "traddr": "10.0.0.2", 00:20:07.817 "trsvcid": "4420" 00:20:07.817 }, 00:20:07.817 "peer_address": { 00:20:07.817 "trtype": "TCP", 00:20:07.817 "adrfam": "IPv4", 00:20:07.817 "traddr": "10.0.0.1", 00:20:07.817 "trsvcid": "34964" 00:20:07.817 }, 00:20:07.817 "auth": { 00:20:07.817 "state": "completed", 00:20:07.817 "digest": "sha384", 00:20:07.817 "dhgroup": "ffdhe8192" 00:20:07.817 } 00:20:07.817 } 00:20:07.817 ]' 00:20:07.817 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.075 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.075 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.075 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.075 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.075 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.075 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.075 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.332 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.265 18:20:35 
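The auth.sh@91-@93 markers firing here show the harness's nesting as it moves from sha384 to sha512: an outer loop over digests, a middle loop over DH groups, and an inner loop over key indices. Reconstructed from those trace markers alone (the array contents are defined earlier in auth.sh and only partially visible in this excerpt), the driver is essentially:

# Loop skeleton implied by the auth.sh@91-@96 trace markers.
for digest in "${digests[@]}"; do                  # auth.sh@91
    for dhgroup in "${dhgroups[@]}"; do            # auth.sh@92
        for keyid in "${!keys[@]}"; do             # auth.sh@93
            # auth.sh@94: reconfigure the host for this combination
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # auth.sh@96: run one full attach/verify/detach cycle
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done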
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:09.265 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.523 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.782 00:20:09.782 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.782 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.782 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.040 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.041 18:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.041 { 00:20:10.041 "cntlid": 97, 00:20:10.041 "qid": 0, 00:20:10.041 "state": "enabled", 00:20:10.041 "thread": "nvmf_tgt_poll_group_000", 00:20:10.041 "listen_address": { 00:20:10.041 "trtype": "TCP", 00:20:10.041 "adrfam": "IPv4", 00:20:10.041 "traddr": "10.0.0.2", 00:20:10.041 "trsvcid": "4420" 00:20:10.041 }, 00:20:10.041 "peer_address": { 00:20:10.041 "trtype": "TCP", 00:20:10.041 "adrfam": "IPv4", 00:20:10.041 "traddr": "10.0.0.1", 00:20:10.041 "trsvcid": "35006" 00:20:10.041 }, 00:20:10.041 "auth": { 00:20:10.041 "state": "completed", 00:20:10.041 "digest": "sha512", 00:20:10.041 "dhgroup": "null" 00:20:10.041 } 00:20:10.041 } 00:20:10.041 ]' 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.041 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.298 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.298 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.298 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.298 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.298 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.556 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:11.538 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.796 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.053 00:20:12.053 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.053 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.053 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.311 { 00:20:12.311 "cntlid": 99, 00:20:12.311 "qid": 0, 00:20:12.311 "state": "enabled", 00:20:12.311 "thread": "nvmf_tgt_poll_group_000", 00:20:12.311 "listen_address": { 00:20:12.311 "trtype": "TCP", 00:20:12.311 "adrfam": "IPv4", 00:20:12.311 
"traddr": "10.0.0.2", 00:20:12.311 "trsvcid": "4420" 00:20:12.311 }, 00:20:12.311 "peer_address": { 00:20:12.311 "trtype": "TCP", 00:20:12.311 "adrfam": "IPv4", 00:20:12.311 "traddr": "10.0.0.1", 00:20:12.311 "trsvcid": "35038" 00:20:12.311 }, 00:20:12.311 "auth": { 00:20:12.311 "state": "completed", 00:20:12.311 "digest": "sha512", 00:20:12.311 "dhgroup": "null" 00:20:12.311 } 00:20:12.311 } 00:20:12.311 ]' 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.311 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.568 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:12.568 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.568 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.569 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.569 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.825 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:13.757 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.014 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.015 18:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.015 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.015 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.015 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.015 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.272 00:20:14.272 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.272 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.272 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.530 { 00:20:14.530 "cntlid": 101, 00:20:14.530 "qid": 0, 00:20:14.530 "state": "enabled", 00:20:14.530 "thread": "nvmf_tgt_poll_group_000", 00:20:14.530 "listen_address": { 00:20:14.530 "trtype": "TCP", 00:20:14.530 "adrfam": "IPv4", 00:20:14.530 "traddr": "10.0.0.2", 00:20:14.530 "trsvcid": "4420" 00:20:14.530 }, 00:20:14.530 "peer_address": { 00:20:14.530 "trtype": "TCP", 00:20:14.530 "adrfam": "IPv4", 00:20:14.530 "traddr": "10.0.0.1", 00:20:14.530 "trsvcid": "39866" 00:20:14.530 }, 00:20:14.530 "auth": { 00:20:14.530 "state": "completed", 00:20:14.530 "digest": "sha512", 00:20:14.530 "dhgroup": "null" 
00:20:14.530 } 00:20:14.530 } 00:20:14.530 ]' 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:14.530 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.788 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.788 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.788 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.045 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:15.978 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.236 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.494 00:20:16.494 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.494 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.494 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.752 { 00:20:16.752 "cntlid": 103, 00:20:16.752 "qid": 0, 00:20:16.752 "state": "enabled", 00:20:16.752 "thread": "nvmf_tgt_poll_group_000", 00:20:16.752 "listen_address": { 00:20:16.752 "trtype": "TCP", 00:20:16.752 "adrfam": "IPv4", 00:20:16.752 "traddr": "10.0.0.2", 00:20:16.752 "trsvcid": "4420" 00:20:16.752 }, 00:20:16.752 "peer_address": { 00:20:16.752 "trtype": "TCP", 00:20:16.752 "adrfam": "IPv4", 00:20:16.752 "traddr": "10.0.0.1", 00:20:16.752 "trsvcid": "39896" 00:20:16.752 }, 00:20:16.752 "auth": { 00:20:16.752 "state": "completed", 00:20:16.752 "digest": "sha512", 00:20:16.752 "dhgroup": "null" 00:20:16.752 } 00:20:16.752 } 00:20:16.752 ]' 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.752 18:20:42 
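The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line above does real work: bash's ${var:+word} expansion produces the flag only when a controller key exists for that index, which is why the key3 passes add the host with --dhchap-key key3 alone while key0-key2 also carry a ckeyN. The idiom in isolation:

# ${var:+word} expands to word only if var is set and non-empty, so the
# optional flag pair vanishes for keys without a controller secret.
ckeys=("ckey0" "ckey1" "ckey2" "")   # index 3 has no controller key
for keyid in 0 3; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
done
# prints: nvmf_subsystem_add_host ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
# prints: nvmf_subsystem_add_host ... --dhchap-key key3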
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.752 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.010 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.941 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.199 18:20:44 
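The --dhchap-secret strings cycling through this log follow the DH-HMAC-CHAP secret representation DHHC-1:<h>:<base64>:, where <h> names the hash the secret is tied to (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). By my reading of that representation (an assumption; the log itself never decodes a key), the base64 payload is the key material plus a trailing 4-byte CRC-32, and the payload sizes here agree: the 01/02/03 secrets decode to 36/52/68 bytes, i.e. 32/48/64-byte keys plus four bytes each.

# Inspect a DHHC-1 secret taken verbatim from the trace; the CRC-32 tail is
# an assumption from the secret representation, not something this log shows.
secret='DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE:'
payload=${secret#DHHC-1:??:}; payload=${payload%:}
printf '%s' "$payload" | base64 -d | wc -c    # -> 36 (32-byte key + 4-byte CRC)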
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.199 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.457 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.715 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.973 { 00:20:18.973 "cntlid": 105, 00:20:18.973 "qid": 0, 00:20:18.973 "state": "enabled", 00:20:18.973 "thread": "nvmf_tgt_poll_group_000", 00:20:18.973 "listen_address": { 00:20:18.973 "trtype": "TCP", 00:20:18.973 "adrfam": "IPv4", 00:20:18.973 "traddr": "10.0.0.2", 00:20:18.973 "trsvcid": "4420" 00:20:18.973 }, 00:20:18.973 "peer_address": { 00:20:18.973 "trtype": "TCP", 00:20:18.973 "adrfam": "IPv4", 00:20:18.973 "traddr": "10.0.0.1", 00:20:18.973 "trsvcid": "39932" 00:20:18.973 }, 00:20:18.973 "auth": { 00:20:18.973 "state": "completed", 00:20:18.973 "digest": "sha512", 00:20:18.973 "dhgroup": "ffdhe2048" 00:20:18.973 } 00:20:18.973 } 00:20:18.973 ]' 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.973 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.231 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.165 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.423 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.681 00:20:20.681 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.682 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.682 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.939 { 00:20:20.939 "cntlid": 107, 00:20:20.939 "qid": 0, 00:20:20.939 "state": "enabled", 00:20:20.939 "thread": "nvmf_tgt_poll_group_000", 00:20:20.939 "listen_address": { 00:20:20.939 "trtype": "TCP", 00:20:20.939 "adrfam": "IPv4", 00:20:20.939 "traddr": "10.0.0.2", 00:20:20.939 "trsvcid": "4420" 00:20:20.939 }, 00:20:20.939 "peer_address": { 00:20:20.939 "trtype": "TCP", 00:20:20.939 "adrfam": "IPv4", 00:20:20.939 "traddr": "10.0.0.1", 00:20:20.939 "trsvcid": "39954" 00:20:20.939 }, 00:20:20.939 "auth": { 00:20:20.939 "state": "completed", 00:20:20.939 "digest": "sha512", 00:20:20.939 "dhgroup": "ffdhe2048" 00:20:20.939 } 00:20:20.939 } 00:20:20.939 ]' 00:20:20.939 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.196 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.453 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:22.384 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.384 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.384 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.384 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.384 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.384 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.385 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.385 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
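For readers following the xtrace above: every iteration of the dhgroup/keyid loop in target/auth.sh replays the same RPC sequence. Below is a minimal sketch of that loop body, reconstructed from the commands visible in this trace — the rpc.py path, sockets, NQNs, addresses, and flags are copied from the log, but the function body itself is an approximation for readability, not target/auth.sh verbatim, and the ckeys array is assumed to be the script's key table.

```bash
#!/usr/bin/env bash
# Sketch of the loop body replayed above for each (dhgroup, keyid) pair.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 qpairs
    # auth.sh@37: a controller key is passed only when ckey$keyid exists
    # (key3 in this run is added with --dhchap-key alone).
    local ckey=()
    [[ -n ${ckeys[$keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")

    # Host side (-s /var/tmp/host.sock): pin one digest/dhgroup combination.
    # In the real script this happens at loop level (auth.sh@94).
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side (default RPC socket): allow the host NQN with this key.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"

    # Host side: attach, authenticating with the same key pair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"

    # auth.sh@44-48: the qpair must report the negotiated parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
}
```

This mirrors the hostrpc/rpc_cmd split in the trace: host-side bdev_nvme_* calls go through -s /var/tmp/host.sock, while the target-side nvmf_* RPCs use the default socket.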
00:20:22.643 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.900 00:20:22.900 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.900 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.900 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.159 { 00:20:23.159 "cntlid": 109, 00:20:23.159 "qid": 0, 00:20:23.159 "state": "enabled", 00:20:23.159 "thread": "nvmf_tgt_poll_group_000", 00:20:23.159 "listen_address": { 00:20:23.159 "trtype": "TCP", 00:20:23.159 "adrfam": "IPv4", 00:20:23.159 "traddr": "10.0.0.2", 00:20:23.159 "trsvcid": "4420" 00:20:23.159 }, 00:20:23.159 "peer_address": { 00:20:23.159 "trtype": "TCP", 00:20:23.159 "adrfam": "IPv4", 00:20:23.159 "traddr": "10.0.0.1", 00:20:23.159 "trsvcid": "39970" 00:20:23.159 }, 00:20:23.159 "auth": { 00:20:23.159 "state": "completed", 00:20:23.159 "digest": "sha512", 00:20:23.159 "dhgroup": "ffdhe2048" 00:20:23.159 } 00:20:23.159 } 00:20:23.159 ]' 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.159 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.417 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.417 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.417 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.417 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.417 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.675 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.607 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.865 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.121 00:20:25.121 18:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.121 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.121 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.379 { 00:20:25.379 "cntlid": 111, 00:20:25.379 "qid": 0, 00:20:25.379 "state": "enabled", 00:20:25.379 "thread": "nvmf_tgt_poll_group_000", 00:20:25.379 "listen_address": { 00:20:25.379 "trtype": "TCP", 00:20:25.379 "adrfam": "IPv4", 00:20:25.379 "traddr": "10.0.0.2", 00:20:25.379 "trsvcid": "4420" 00:20:25.379 }, 00:20:25.379 "peer_address": { 00:20:25.379 "trtype": "TCP", 00:20:25.379 "adrfam": "IPv4", 00:20:25.379 "traddr": "10.0.0.1", 00:20:25.379 "trsvcid": "42568" 00:20:25.379 }, 00:20:25.379 "auth": { 00:20:25.379 "state": "completed", 00:20:25.379 "digest": "sha512", 00:20:25.379 "dhgroup": "ffdhe2048" 00:20:25.379 } 00:20:25.379 } 00:20:25.379 ]' 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.379 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.636 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.636 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.636 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.636 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.636 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.903 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.838 18:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.838 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.096 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.353 00:20:27.353 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.353 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.353 18:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.610 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.610 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.610 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.610 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.611 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.611 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.611 { 00:20:27.611 "cntlid": 113, 00:20:27.611 "qid": 0, 00:20:27.611 "state": "enabled", 00:20:27.611 "thread": "nvmf_tgt_poll_group_000", 00:20:27.611 "listen_address": { 00:20:27.611 "trtype": "TCP", 00:20:27.611 "adrfam": "IPv4", 00:20:27.611 "traddr": "10.0.0.2", 00:20:27.611 "trsvcid": "4420" 00:20:27.611 }, 00:20:27.611 "peer_address": { 00:20:27.611 "trtype": "TCP", 00:20:27.611 "adrfam": "IPv4", 00:20:27.611 "traddr": "10.0.0.1", 00:20:27.611 "trsvcid": "42602" 00:20:27.611 }, 00:20:27.611 "auth": { 00:20:27.611 "state": "completed", 00:20:27.611 "digest": "sha512", 00:20:27.611 "dhgroup": "ffdhe3072" 00:20:27.611 } 00:20:27.611 } 00:20:27.611 ]' 00:20:27.611 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.611 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.611 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.867 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.867 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.867 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.867 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.867 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.123 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:29.054 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.054 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.311 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.569 00:20:29.569 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.569 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.569 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.827 { 00:20:29.827 "cntlid": 115, 00:20:29.827 "qid": 0, 00:20:29.827 "state": "enabled", 00:20:29.827 "thread": "nvmf_tgt_poll_group_000", 00:20:29.827 "listen_address": { 00:20:29.827 "trtype": "TCP", 00:20:29.827 "adrfam": "IPv4", 00:20:29.827 "traddr": "10.0.0.2", 00:20:29.827 "trsvcid": "4420" 00:20:29.827 }, 00:20:29.827 "peer_address": { 00:20:29.827 "trtype": "TCP", 00:20:29.827 "adrfam": "IPv4", 00:20:29.827 "traddr": "10.0.0.1", 00:20:29.827 "trsvcid": "42636" 00:20:29.827 }, 00:20:29.827 "auth": { 00:20:29.827 "state": "completed", 00:20:29.827 "digest": "sha512", 00:20:29.827 "dhgroup": "ffdhe3072" 00:20:29.827 } 00:20:29.827 } 00:20:29.827 ]' 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.827 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.085 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.085 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.085 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.343 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.278 18:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.278 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.536 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.794 00:20:31.794 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.794 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.794 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.051 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.052 18:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.052 { 00:20:32.052 "cntlid": 117, 00:20:32.052 "qid": 0, 00:20:32.052 "state": "enabled", 00:20:32.052 "thread": "nvmf_tgt_poll_group_000", 00:20:32.052 "listen_address": { 00:20:32.052 "trtype": "TCP", 00:20:32.052 "adrfam": "IPv4", 00:20:32.052 "traddr": "10.0.0.2", 00:20:32.052 "trsvcid": "4420" 00:20:32.052 }, 00:20:32.052 "peer_address": { 00:20:32.052 "trtype": "TCP", 00:20:32.052 "adrfam": "IPv4", 00:20:32.052 "traddr": "10.0.0.1", 00:20:32.052 "trsvcid": "42678" 00:20:32.052 }, 00:20:32.052 "auth": { 00:20:32.052 "state": "completed", 00:20:32.052 "digest": "sha512", 00:20:32.052 "dhgroup": "ffdhe3072" 00:20:32.052 } 00:20:32.052 } 00:20:32.052 ]' 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.052 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.310 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.684 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.941 00:20:34.198 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.198 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.198 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.455 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.455 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.455 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.455 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.455 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.455 { 00:20:34.455 "cntlid": 119, 00:20:34.455 "qid": 0, 00:20:34.455 "state": "enabled", 00:20:34.455 "thread": 
"nvmf_tgt_poll_group_000", 00:20:34.455 "listen_address": { 00:20:34.455 "trtype": "TCP", 00:20:34.455 "adrfam": "IPv4", 00:20:34.455 "traddr": "10.0.0.2", 00:20:34.455 "trsvcid": "4420" 00:20:34.455 }, 00:20:34.455 "peer_address": { 00:20:34.456 "trtype": "TCP", 00:20:34.456 "adrfam": "IPv4", 00:20:34.456 "traddr": "10.0.0.1", 00:20:34.456 "trsvcid": "43924" 00:20:34.456 }, 00:20:34.456 "auth": { 00:20:34.456 "state": "completed", 00:20:34.456 "digest": "sha512", 00:20:34.456 "dhgroup": "ffdhe3072" 00:20:34.456 } 00:20:34.456 } 00:20:34.456 ]' 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.456 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.712 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:20:35.642 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.643 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.900 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.464 00:20:36.464 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.464 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.464 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.722 { 00:20:36.722 "cntlid": 121, 00:20:36.722 "qid": 0, 00:20:36.722 "state": "enabled", 00:20:36.722 "thread": "nvmf_tgt_poll_group_000", 00:20:36.722 "listen_address": { 00:20:36.722 "trtype": "TCP", 00:20:36.722 "adrfam": "IPv4", 00:20:36.722 "traddr": "10.0.0.2", 00:20:36.722 "trsvcid": "4420" 00:20:36.722 }, 00:20:36.722 "peer_address": { 00:20:36.722 "trtype": "TCP", 00:20:36.722 "adrfam": 
"IPv4", 00:20:36.722 "traddr": "10.0.0.1", 00:20:36.722 "trsvcid": "43960" 00:20:36.722 }, 00:20:36.722 "auth": { 00:20:36.722 "state": "completed", 00:20:36.722 "digest": "sha512", 00:20:36.722 "dhgroup": "ffdhe4096" 00:20:36.722 } 00:20:36.722 } 00:20:36.722 ]' 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.722 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.979 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:37.911 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.912 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.476 
18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.476 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.733 00:20:38.733 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.733 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.733 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.991 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.991 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.992 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.992 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.992 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.992 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.992 { 00:20:38.992 "cntlid": 123, 00:20:38.992 "qid": 0, 00:20:38.992 "state": "enabled", 00:20:38.992 "thread": "nvmf_tgt_poll_group_000", 00:20:38.992 "listen_address": { 00:20:38.992 "trtype": "TCP", 00:20:38.992 "adrfam": "IPv4", 00:20:38.992 "traddr": "10.0.0.2", 00:20:38.992 "trsvcid": "4420" 00:20:38.992 }, 00:20:38.992 "peer_address": { 00:20:38.992 "trtype": "TCP", 00:20:38.992 "adrfam": "IPv4", 00:20:38.992 "traddr": "10.0.0.1", 00:20:38.992 "trsvcid": "43986" 00:20:38.992 }, 00:20:38.992 "auth": { 00:20:38.992 "state": "completed", 00:20:38.992 "digest": "sha512", 00:20:38.992 "dhgroup": "ffdhe4096" 00:20:38.992 } 00:20:38.992 } 00:20:38.992 ]' 00:20:38.992 18:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.992 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.250 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.622 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.880 00:20:40.880 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.880 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.880 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.138 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.138 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.138 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.138 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.406 { 00:20:41.406 "cntlid": 125, 00:20:41.406 "qid": 0, 00:20:41.406 "state": "enabled", 00:20:41.406 "thread": "nvmf_tgt_poll_group_000", 00:20:41.406 "listen_address": { 00:20:41.406 "trtype": "TCP", 00:20:41.406 "adrfam": "IPv4", 00:20:41.406 "traddr": "10.0.0.2", 00:20:41.406 "trsvcid": "4420" 00:20:41.406 }, 00:20:41.406 "peer_address": { 00:20:41.406 "trtype": "TCP", 00:20:41.406 "adrfam": "IPv4", 00:20:41.406 "traddr": "10.0.0.1", 00:20:41.406 "trsvcid": "44004" 00:20:41.406 }, 00:20:41.406 "auth": { 00:20:41.406 "state": "completed", 00:20:41.406 "digest": "sha512", 00:20:41.406 "dhgroup": "ffdhe4096" 00:20:41.406 } 00:20:41.406 } 00:20:41.406 ]' 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.406 
18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.406 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.711 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.644 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.903 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.161 00:20:43.161 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.161 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.161 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.418 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.418 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.418 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.418 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.418 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.418 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.418 { 00:20:43.418 "cntlid": 127, 00:20:43.419 "qid": 0, 00:20:43.419 "state": "enabled", 00:20:43.419 "thread": "nvmf_tgt_poll_group_000", 00:20:43.419 "listen_address": { 00:20:43.419 "trtype": "TCP", 00:20:43.419 "adrfam": "IPv4", 00:20:43.419 "traddr": "10.0.0.2", 00:20:43.419 "trsvcid": "4420" 00:20:43.419 }, 00:20:43.419 "peer_address": { 00:20:43.419 "trtype": "TCP", 00:20:43.419 "adrfam": "IPv4", 00:20:43.419 "traddr": "10.0.0.1", 00:20:43.419 "trsvcid": "44028" 00:20:43.419 }, 00:20:43.419 "auth": { 00:20:43.419 "state": "completed", 00:20:43.419 "digest": "sha512", 00:20:43.419 "dhgroup": "ffdhe4096" 00:20:43.419 } 00:20:43.419 } 00:20:43.419 ]' 00:20:43.419 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.676 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.935 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.868 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.869 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:44.869 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.127 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.692 00:20:45.692 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.692 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.693 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.951 { 00:20:45.951 "cntlid": 129, 00:20:45.951 "qid": 0, 00:20:45.951 "state": "enabled", 00:20:45.951 "thread": "nvmf_tgt_poll_group_000", 00:20:45.951 "listen_address": { 00:20:45.951 "trtype": "TCP", 00:20:45.951 "adrfam": "IPv4", 00:20:45.951 "traddr": "10.0.0.2", 00:20:45.951 "trsvcid": "4420" 00:20:45.951 }, 00:20:45.951 "peer_address": { 00:20:45.951 "trtype": "TCP", 00:20:45.951 "adrfam": "IPv4", 00:20:45.951 "traddr": "10.0.0.1", 00:20:45.951 "trsvcid": "59760" 00:20:45.951 }, 00:20:45.951 "auth": { 00:20:45.951 "state": "completed", 00:20:45.951 "digest": "sha512", 00:20:45.951 "dhgroup": "ffdhe6144" 00:20:45.951 } 00:20:45.951 } 00:20:45.951 ]' 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.951 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.951 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.951 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.951 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.951 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.951 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.209 
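The nvme-cli leg of each pass repeats the handshake from the kernel initiator. Secrets are passed inline in DHHC-1 transport format, DHHC-1:<hh>:<base64>:, where the two-digit field encodes the hash transformation applied to the secret (00 meaning an unhashed secret); --dhchap-secret authenticates the host, and --dhchap-ctrl-secret lets the host verify the controller in return for bidirectional authentication. In outline, with the secrets and the 5b23e107-... identifiers elided exactly as flags appear in this log:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-... --hostid 5b23e107-... \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  # success is confirmed by a clean teardown:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expects "disconnected 1 controller(s)"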
18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.143 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.400 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.964 00:20:47.964 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.964 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.964 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.221 { 00:20:48.221 "cntlid": 131, 00:20:48.221 "qid": 0, 00:20:48.221 "state": "enabled", 00:20:48.221 "thread": "nvmf_tgt_poll_group_000", 00:20:48.221 "listen_address": { 00:20:48.221 "trtype": "TCP", 00:20:48.221 "adrfam": "IPv4", 00:20:48.221 "traddr": "10.0.0.2", 00:20:48.221 "trsvcid": "4420" 00:20:48.221 }, 00:20:48.221 "peer_address": { 00:20:48.221 "trtype": "TCP", 00:20:48.221 "adrfam": "IPv4", 00:20:48.221 "traddr": "10.0.0.1", 00:20:48.221 "trsvcid": "59790" 00:20:48.221 }, 00:20:48.221 "auth": { 00:20:48.221 "state": "completed", 00:20:48.221 "digest": "sha512", 00:20:48.221 "dhgroup": "ffdhe6144" 00:20:48.221 } 00:20:48.221 } 00:20:48.221 ]' 00:20:48.221 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.477 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.734 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.667 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.925 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.926 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.926 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.926 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.926 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.926 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.492 
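The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that xtrace prints on every pass is bash's optional-argument idiom: the array expands to the --dhchap-ctrlr-key flag only when a controller key exists for that key id, and to nothing at all otherwise. That is why the key3 passes in this log call nvmf_subsystem_add_host with --dhchap-key key3 alone, while key0..key2 also get a matching --dhchap-ctrlr-key ckeyN. A minimal standalone illustration of the expansion (the values here are hypothetical, not from the test):

  ckeys=("c0" "c1" "c2" "")        # key id 3 has no controller secret
  id=3
  args=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
  echo "${#args[@]}"               # prints 0: the flag is omitted entirely
  id=1
  args=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
  echo "${args[@]}"                # prints: --dhchap-ctrlr-key ckey1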
00:20:50.492 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.492 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.492 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.750 { 00:20:50.750 "cntlid": 133, 00:20:50.750 "qid": 0, 00:20:50.750 "state": "enabled", 00:20:50.750 "thread": "nvmf_tgt_poll_group_000", 00:20:50.750 "listen_address": { 00:20:50.750 "trtype": "TCP", 00:20:50.750 "adrfam": "IPv4", 00:20:50.750 "traddr": "10.0.0.2", 00:20:50.750 "trsvcid": "4420" 00:20:50.750 }, 00:20:50.750 "peer_address": { 00:20:50.750 "trtype": "TCP", 00:20:50.750 "adrfam": "IPv4", 00:20:50.750 "traddr": "10.0.0.1", 00:20:50.750 "trsvcid": "59818" 00:20:50.750 }, 00:20:50.750 "auth": { 00:20:50.750 "state": "completed", 00:20:50.750 "digest": "sha512", 00:20:50.750 "dhgroup": "ffdhe6144" 00:20:50.750 } 00:20:50.750 } 00:20:50.750 ]' 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.750 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.008 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.382 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.382 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.947 00:20:52.947 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.947 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.947 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.206 { 00:20:53.206 "cntlid": 135, 00:20:53.206 "qid": 0, 00:20:53.206 "state": "enabled", 00:20:53.206 "thread": "nvmf_tgt_poll_group_000", 00:20:53.206 "listen_address": { 00:20:53.206 "trtype": "TCP", 00:20:53.206 "adrfam": "IPv4", 00:20:53.206 "traddr": "10.0.0.2", 00:20:53.206 "trsvcid": "4420" 00:20:53.206 }, 00:20:53.206 "peer_address": { 00:20:53.206 "trtype": "TCP", 00:20:53.206 "adrfam": "IPv4", 00:20:53.206 "traddr": "10.0.0.1", 00:20:53.206 "trsvcid": "59858" 00:20:53.206 }, 00:20:53.206 "auth": { 00:20:53.206 "state": "completed", 00:20:53.206 "digest": "sha512", 00:20:53.206 "dhgroup": "ffdhe6144" 00:20:53.206 } 00:20:53.206 } 00:20:53.206 ]' 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.206 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.463 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.463 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.463 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.464 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:54.398 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.655 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.588 00:20:55.588 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.588 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.588 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
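Structurally, everything from the start of the sha512 section down to here is one matrix walk: an outer loop over DH groups (the for dhgroup in "${dhgroups[@]}" at auth.sh@92) and an inner loop over the key ids (for keyid in "${!keys[@]}" at auth.sh@93), with connect_authenticate (auth.sh@96) doing one full set-options/add-host/attach/verify/detach/nvme-connect round per combination: ffdhe4096, ffdhe6144, and now ffdhe8192 in this stretch. The loop shape below is reconstructed from the line markers in this log; helper and variable names are the ones the xtrace shows, while the enclosing $digest variable is an assumption based on the per-pass "digest=sha512" assignments:

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done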
00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.847 { 00:20:55.847 "cntlid": 137, 00:20:55.847 "qid": 0, 00:20:55.847 "state": "enabled", 00:20:55.847 "thread": "nvmf_tgt_poll_group_000", 00:20:55.847 "listen_address": { 00:20:55.847 "trtype": "TCP", 00:20:55.847 "adrfam": "IPv4", 00:20:55.847 "traddr": "10.0.0.2", 00:20:55.847 "trsvcid": "4420" 00:20:55.847 }, 00:20:55.847 "peer_address": { 00:20:55.847 "trtype": "TCP", 00:20:55.847 "adrfam": "IPv4", 00:20:55.847 "traddr": "10.0.0.1", 00:20:55.847 "trsvcid": "60750" 00:20:55.847 }, 00:20:55.847 "auth": { 00:20:55.847 "state": "completed", 00:20:55.847 "digest": "sha512", 00:20:55.847 "dhgroup": "ffdhe8192" 00:20:55.847 } 00:20:55.847 } 00:20:55.847 ]' 00:20:55.847 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.104 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:57.332 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.588 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.522 00:20:58.522 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.522 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.522 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.780 { 00:20:58.780 "cntlid": 139, 00:20:58.780 "qid": 0, 00:20:58.780 "state": "enabled", 00:20:58.780 "thread": "nvmf_tgt_poll_group_000", 00:20:58.780 "listen_address": { 00:20:58.780 "trtype": "TCP", 00:20:58.780 "adrfam": "IPv4", 00:20:58.780 "traddr": "10.0.0.2", 00:20:58.780 "trsvcid": "4420" 00:20:58.780 }, 00:20:58.780 "peer_address": { 00:20:58.780 "trtype": "TCP", 00:20:58.780 "adrfam": "IPv4", 00:20:58.780 "traddr": "10.0.0.1", 00:20:58.780 "trsvcid": "60774" 00:20:58.780 }, 00:20:58.780 "auth": { 00:20:58.780 "state": "completed", 00:20:58.780 "digest": "sha512", 00:20:58.780 "dhgroup": "ffdhe8192" 00:20:58.780 } 00:20:58.780 } 00:20:58.780 ]' 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.780 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.038 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmZmNTM3NGM3MmNjOTAwNzI0YTYxZWRjNGMzYzhkOTSYP6Y9: --dhchap-ctrl-secret DHHC-1:02:NmViYWM2ZTgyZTA1YTZjZTA5NGJjNDA5ZTRlN2VlOWM4MDc3ZDJhNTIyOTQzMTY14m9URQ==: 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.974 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.233 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.169 00:21:01.169 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.169 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.169 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.428 { 00:21:01.428 "cntlid": 141, 00:21:01.428 "qid": 0, 00:21:01.428 "state": "enabled", 00:21:01.428 "thread": "nvmf_tgt_poll_group_000", 00:21:01.428 "listen_address": 
{ 00:21:01.428 "trtype": "TCP", 00:21:01.428 "adrfam": "IPv4", 00:21:01.428 "traddr": "10.0.0.2", 00:21:01.428 "trsvcid": "4420" 00:21:01.428 }, 00:21:01.428 "peer_address": { 00:21:01.428 "trtype": "TCP", 00:21:01.428 "adrfam": "IPv4", 00:21:01.428 "traddr": "10.0.0.1", 00:21:01.428 "trsvcid": "60806" 00:21:01.428 }, 00:21:01.428 "auth": { 00:21:01.428 "state": "completed", 00:21:01.428 "digest": "sha512", 00:21:01.428 "dhgroup": "ffdhe8192" 00:21:01.428 } 00:21:01.428 } 00:21:01.428 ]' 00:21:01.428 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.685 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.685 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.686 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.686 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.686 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.686 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.686 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.943 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTcwMDljZTRlYjQ5ZmFhNTkyMjVjN2M4NWQ5YTkxNjNlNzI2YTBhNWYzY2I3MWYzJsyCnw==: --dhchap-ctrl-secret DHHC-1:01:NzY4ZjJjNDY2NjI3NzNiZGQxYzYxY2I0NmZmOWVjM2Obj6pE: 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.875 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:03.133 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.134 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.072 00:21:04.072 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.072 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.072 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.330 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.330 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.330 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.330 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.330 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.331 { 00:21:04.331 "cntlid": 143, 00:21:04.331 "qid": 0, 00:21:04.331 "state": "enabled", 00:21:04.331 "thread": "nvmf_tgt_poll_group_000", 00:21:04.331 "listen_address": { 00:21:04.331 "trtype": "TCP", 00:21:04.331 "adrfam": "IPv4", 00:21:04.331 "traddr": "10.0.0.2", 00:21:04.331 "trsvcid": "4420" 00:21:04.331 }, 00:21:04.331 "peer_address": { 00:21:04.331 "trtype": "TCP", 00:21:04.331 "adrfam": "IPv4", 00:21:04.331 "traddr": "10.0.0.1", 00:21:04.331 "trsvcid": "52102" 00:21:04.331 }, 00:21:04.331 "auth": { 00:21:04.331 "state": "completed", 00:21:04.331 "digest": "sha512", 00:21:04.331 "dhgroup": 
"ffdhe8192" 00:21:04.331 } 00:21:04.331 } 00:21:04.331 ]' 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.331 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.590 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.527 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.785 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.720 00:21:06.720 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.720 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.720 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.978 { 00:21:06.978 "cntlid": 145, 00:21:06.978 "qid": 0, 00:21:06.978 "state": "enabled", 00:21:06.978 "thread": "nvmf_tgt_poll_group_000", 00:21:06.978 "listen_address": { 00:21:06.978 "trtype": "TCP", 00:21:06.978 "adrfam": "IPv4", 00:21:06.978 "traddr": "10.0.0.2", 00:21:06.978 "trsvcid": "4420" 00:21:06.978 }, 00:21:06.978 "peer_address": { 00:21:06.978 "trtype": "TCP", 00:21:06.978 "adrfam": "IPv4", 00:21:06.978 "traddr": "10.0.0.1", 00:21:06.978 "trsvcid": "52120" 00:21:06.978 }, 00:21:06.978 "auth": { 00:21:06.978 
"state": "completed", 00:21:06.978 "digest": "sha512", 00:21:06.978 "dhgroup": "ffdhe8192" 00:21:06.978 } 00:21:06.978 } 00:21:06.978 ]' 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.978 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.236 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.236 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.236 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.236 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.236 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.493 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJiZDQzMzQzMDUzODNlYWRhNWNjY2ViYmE4MjhlZDBhYzlkZDhiMTUyODA3OTZmvRZSdw==: --dhchap-ctrl-secret DHHC-1:03:ZTM5YjMzMjVkNzQ4ODk1NjE1NzEyZDRmZWEyYTY0NjVkNmVmNTE5MTBlYmI1Y2Q2Y2Y4MmE5ODhiZGViZGVlOX8V3ps=: 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:08.428 18:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.428 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:09.366 request: 00:21:09.366 { 00:21:09.366 "name": "nvme0", 00:21:09.366 "trtype": "tcp", 00:21:09.366 "traddr": "10.0.0.2", 00:21:09.366 "adrfam": "ipv4", 00:21:09.366 "trsvcid": "4420", 00:21:09.366 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:09.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.366 "prchk_reftag": false, 00:21:09.366 "prchk_guard": false, 00:21:09.366 "hdgst": false, 00:21:09.366 "ddgst": false, 00:21:09.366 "dhchap_key": "key2", 00:21:09.366 "method": "bdev_nvme_attach_controller", 00:21:09.366 "req_id": 1 00:21:09.366 } 00:21:09.366 Got JSON-RPC error response 00:21:09.366 response: 00:21:09.366 { 00:21:09.366 "code": -5, 00:21:09.366 "message": "Input/output error" 00:21:09.366 } 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.366 
18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:09.366 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:10.304 request: 00:21:10.304 { 00:21:10.304 "name": "nvme0", 00:21:10.304 "trtype": "tcp", 00:21:10.304 "traddr": "10.0.0.2", 00:21:10.304 "adrfam": "ipv4", 00:21:10.304 "trsvcid": "4420", 00:21:10.304 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:10.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.304 "prchk_reftag": false, 00:21:10.304 "prchk_guard": false, 00:21:10.304 "hdgst": false, 00:21:10.304 "ddgst": false, 00:21:10.304 "dhchap_key": "key1", 00:21:10.304 "dhchap_ctrlr_key": "ckey2", 00:21:10.304 "method": "bdev_nvme_attach_controller", 00:21:10.304 "req_id": 1 00:21:10.304 } 00:21:10.304 Got JSON-RPC error response 00:21:10.304 response: 00:21:10.304 { 00:21:10.304 "code": -5, 00:21:10.304 "message": "Input/output error" 00:21:10.304 } 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.304 18:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.304 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.305 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.873 request: 00:21:10.873 { 00:21:10.873 "name": "nvme0", 00:21:10.873 "trtype": "tcp", 00:21:10.873 "traddr": "10.0.0.2", 00:21:10.873 "adrfam": "ipv4", 00:21:10.873 "trsvcid": "4420", 00:21:10.873 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:10.873 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.873 "prchk_reftag": false, 00:21:10.873 "prchk_guard": false, 00:21:10.873 "hdgst": false, 00:21:10.873 "ddgst": false, 00:21:10.873 "dhchap_key": "key1", 00:21:10.873 "dhchap_ctrlr_key": "ckey1", 00:21:10.873 "method": "bdev_nvme_attach_controller", 00:21:10.873 "req_id": 1 00:21:10.873 } 00:21:10.873 Got JSON-RPC error response 00:21:10.873 response: 00:21:10.873 { 00:21:10.873 "code": -5, 00:21:10.873 "message": "Input/output error" 00:21:10.873 } 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1470811 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1470811 ']' 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1470811 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470811 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470811' 00:21:11.132 killing process with pid 1470811 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1470811 00:21:11.132 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1470811 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1493340 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1493340 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1493340 ']' 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.393 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1493340 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1493340 ']' 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
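The records above capture the target being restarted with authentication tracing enabled (-L nvmf_auth) before the remaining DH-HMAC-CHAP cases run. Condensed, the flow each connect_authenticate iteration drives is the three-step sequence below; this is a minimal sketch using the NQNs, addresses, and RPC socket from this run, with the target-side call shown against the default RPC socket for brevity, and key3 naming a DH-HMAC-CHAP key registered earlier in the test:

    # Host side: restrict the digests/DH groups offered during negotiation
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host NQN and bind it to its DH-HMAC-CHAP key
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3

    # Host side: attach; succeeds only when digest, DH group, and key all match
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

When either side is misconfigured (for example, the host later restricted to sha256 only, or deliberately mismatched controller keys), the attach is expected to fail, which bdev_nvme_attach_controller reports as the JSON-RPC error code -5 ("Input/output error") seen in the negative-path checks that follow.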
00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.652 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.910 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.884 00:21:12.884 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.884 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.884 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.142 { 00:21:13.142 "cntlid": 1, 00:21:13.142 "qid": 0, 00:21:13.142 "state": "enabled", 00:21:13.142 "thread": "nvmf_tgt_poll_group_000", 00:21:13.142 "listen_address": { 00:21:13.142 "trtype": "TCP", 00:21:13.142 "adrfam": "IPv4", 00:21:13.142 "traddr": "10.0.0.2", 00:21:13.142 "trsvcid": "4420" 00:21:13.142 }, 00:21:13.142 "peer_address": { 00:21:13.142 "trtype": "TCP", 00:21:13.142 "adrfam": "IPv4", 00:21:13.142 "traddr": "10.0.0.1", 00:21:13.142 "trsvcid": "52158" 00:21:13.142 }, 00:21:13.142 "auth": { 00:21:13.142 "state": "completed", 00:21:13.142 "digest": "sha512", 00:21:13.142 "dhgroup": "ffdhe8192" 00:21:13.142 } 00:21:13.142 } 00:21:13.142 ]' 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.142 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.401 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.401 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.402 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.662 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDM3ZTIwOTUyYTRmNzUxNmVmZjY1ZTJhMWYxZWE5YzBmOWJlMmNlN2U3YTU3ZDBiNDU2MTNjMjI2NjI2YmUxY4cWUho=: 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:14.599 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.857 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.116 request: 00:21:15.116 { 00:21:15.116 "name": "nvme0", 00:21:15.116 "trtype": "tcp", 00:21:15.116 "traddr": "10.0.0.2", 00:21:15.116 "adrfam": "ipv4", 00:21:15.116 "trsvcid": "4420", 00:21:15.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:15.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.116 "prchk_reftag": false, 00:21:15.116 "prchk_guard": false, 00:21:15.116 "hdgst": false, 00:21:15.116 "ddgst": false, 00:21:15.116 "dhchap_key": "key3", 00:21:15.116 "method": "bdev_nvme_attach_controller", 00:21:15.116 "req_id": 1 00:21:15.116 } 00:21:15.116 Got JSON-RPC error response 00:21:15.116 response: 00:21:15.116 { 00:21:15.116 "code": -5, 00:21:15.116 "message": "Input/output error" 00:21:15.116 } 00:21:15.116 18:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:15.116 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.373 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.631 request: 00:21:15.631 { 00:21:15.631 "name": "nvme0", 00:21:15.631 "trtype": "tcp", 00:21:15.631 "traddr": "10.0.0.2", 00:21:15.631 "adrfam": "ipv4", 00:21:15.631 "trsvcid": "4420", 00:21:15.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:15.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.631 "prchk_reftag": false, 00:21:15.631 "prchk_guard": false, 00:21:15.631 "hdgst": false, 00:21:15.631 "ddgst": false, 00:21:15.631 "dhchap_key": "key3", 00:21:15.631 
"method": "bdev_nvme_attach_controller", 00:21:15.631 "req_id": 1 00:21:15.631 } 00:21:15.631 Got JSON-RPC error response 00:21:15.631 response: 00:21:15.631 { 00:21:15.631 "code": -5, 00:21:15.631 "message": "Input/output error" 00:21:15.631 } 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:15.631 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:15.889 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:16.147 request: 00:21:16.147 { 00:21:16.147 "name": "nvme0", 00:21:16.147 "trtype": "tcp", 00:21:16.147 "traddr": "10.0.0.2", 00:21:16.147 "adrfam": "ipv4", 00:21:16.147 "trsvcid": "4420", 00:21:16.147 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:16.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.147 "prchk_reftag": false, 00:21:16.147 "prchk_guard": false, 00:21:16.147 "hdgst": false, 00:21:16.147 "ddgst": false, 00:21:16.147 "dhchap_key": "key0", 00:21:16.147 "dhchap_ctrlr_key": "key1", 00:21:16.147 "method": "bdev_nvme_attach_controller", 00:21:16.147 "req_id": 1 00:21:16.147 } 00:21:16.147 Got JSON-RPC error response 00:21:16.147 response: 00:21:16.147 { 00:21:16.147 "code": -5, 00:21:16.147 "message": "Input/output error" 00:21:16.147 } 00:21:16.147 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:16.147 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.147 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.147 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.147 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:16.147 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:16.406 00:21:16.406 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:16.406 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.406 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:16.663 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.663 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.663 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1470864 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1470864 ']' 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1470864 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.921 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470864 00:21:16.921 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:16.921 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:16.921 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470864' 00:21:16.921 killing process with pid 1470864 00:21:16.921 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1470864 00:21:16.921 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1470864 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.490 rmmod nvme_tcp 00:21:17.490 rmmod nvme_fabrics 00:21:17.490 rmmod nvme_keyring 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 1493340 ']' 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1493340 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1493340 ']' 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1493340 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1493340 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1493340' 00:21:17.490 killing process with pid 1493340 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1493340 00:21:17.490 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1493340 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.750 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.awf /tmp/spdk.key-sha256.0zr /tmp/spdk.key-sha384.HRW /tmp/spdk.key-sha512.X8B /tmp/spdk.key-sha512.UE1 /tmp/spdk.key-sha384.fW8 /tmp/spdk.key-sha256.2Zl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:19.654 00:21:19.654 real 3m9.224s 00:21:19.654 user 7m20.065s 00:21:19.654 sys 0m25.095s 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.654 ************************************ 00:21:19.654 END TEST nvmf_auth_target 00:21:19.654 ************************************ 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:19.654 18:21:45 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.654 18:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.911 ************************************ 00:21:19.912 START TEST nvmf_bdevio_no_huge 00:21:19.912 ************************************ 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:19.912 * Looking for test storage... 00:21:19.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.912 18:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.912 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.812 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.813 18:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:21.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.813 18:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:21.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:21.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
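The xtrace above shows nvmf/common.sh classifying NICs by PCI vendor:device ID (0x8086 - 0x159b is an Intel E810 function bound to the ice driver) and then resolving each PCI address to its kernel net device through sysfs. A minimal standalone sketch of the same discovery, assuming lspci is available; the awk pattern and loop shape are illustrative, and only the sysfs glob and the ##*/ basename trim come from the trace itself:

# Enumerate E810 functions and print the net devices behind them,
# mirroring the pci_devs -> pci_net_devs steps in the trace.
for pci in $(lspci -Dnn | awk '/\[8086:159b\]/ {print $1}'); do
	for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
		[ -e "$netdev" ] || continue
		echo "Found net device under $pci: ${netdev##*/}"
	done
done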
00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:21.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.813 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:21.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:21.813 00:21:21.813 --- 10.0.0.2 ping statistics --- 00:21:21.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.813 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:21.813 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:21.813 00:21:21.813 --- 10.0.0.1 ping statistics --- 00:21:21.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.813 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1495984 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1495984 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1495984 ']' 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
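The pings above confirm the topology that nvmf_tcp_init just built: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, and TCP port 4420 is opened in iptables before nvmf_tgt is launched inside the namespace with --no-huge -s 1024. Condensed from the trace into a runnable sketch ($SPDK_DIR stands in for the workspace checkout, and backgrounding the target is an assumption):

# Two-port NVMe/TCP test topology, target isolated in a network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt \
	-i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &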
00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.814 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.814 [2024-07-26 18:21:47.956164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:21.814 [2024-07-26 18:21:47.956245] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:22.071 [2024-07-26 18:21:48.007893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:22.071 [2024-07-26 18:21:48.027157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.071 [2024-07-26 18:21:48.110975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.071 [2024-07-26 18:21:48.111030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.071 [2024-07-26 18:21:48.111053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.071 [2024-07-26 18:21:48.111088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.071 [2024-07-26 18:21:48.111098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.071 [2024-07-26 18:21:48.111155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:22.071 [2024-07-26 18:21:48.111217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:22.071 [2024-07-26 18:21:48.111284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:22.071 [2024-07-26 18:21:48.111291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.071 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.071 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:22.071 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.071 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.071 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 [2024-07-26 18:21:48.224475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b 
Malloc0 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 Malloc0 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 [2024-07-26 18:21:48.262176] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.330 { 00:21:22.330 "params": { 00:21:22.330 "name": "Nvme$subsystem", 00:21:22.330 "trtype": "$TEST_TRANSPORT", 00:21:22.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.330 "adrfam": "ipv4", 00:21:22.330 "trsvcid": "$NVMF_PORT", 00:21:22.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.330 "hdgst": ${hdgst:-false}, 00:21:22.330 "ddgst": ${ddgst:-false} 00:21:22.330 }, 00:21:22.330 "method": "bdev_nvme_attach_controller" 00:21:22.330 } 00:21:22.330 EOF 00:21:22.330 )") 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 
-- # cat 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:22.330 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:22.330 "params": { 00:21:22.330 "name": "Nvme1", 00:21:22.330 "trtype": "tcp", 00:21:22.330 "traddr": "10.0.0.2", 00:21:22.330 "adrfam": "ipv4", 00:21:22.330 "trsvcid": "4420", 00:21:22.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.330 "hdgst": false, 00:21:22.330 "ddgst": false 00:21:22.330 }, 00:21:22.330 "method": "bdev_nvme_attach_controller" 00:21:22.330 }' 00:21:22.330 [2024-07-26 18:21:48.306557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:22.330 [2024-07-26 18:21:48.306647] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1496123 ] 00:21:22.330 [2024-07-26 18:21:48.349193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:22.330 [2024-07-26 18:21:48.369276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:22.330 [2024-07-26 18:21:48.452055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.330 [2024-07-26 18:21:48.452105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.330 [2024-07-26 18:21:48.452109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.589 I/O targets: 00:21:22.589 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:22.589 00:21:22.589 00:21:22.589 CUnit - A unit testing framework for C - Version 2.1-3 00:21:22.589 http://cunit.sourceforge.net/ 00:21:22.589 00:21:22.589 00:21:22.589 Suite: bdevio tests on: Nvme1n1 00:21:22.589 Test: blockdev write read block ...passed 00:21:22.848 Test: blockdev write zeroes read block ...passed 00:21:22.848 Test: blockdev write zeroes read no split ...passed 00:21:22.848 Test: blockdev write zeroes read split ...passed 00:21:22.848 Test: blockdev write zeroes read split partial ...passed 00:21:22.848 Test: blockdev reset ...[2024-07-26 18:21:48.864504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.848 [2024-07-26 18:21:48.864619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1333330 (9): Bad file descriptor 00:21:23.106 [2024-07-26 18:21:49.007271] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
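A few entries back, gen_nvmf_target_json assembled the --json config that bdevio is consuming here: a here-doc template expanded once per subsystem, joined on IFS=, and pretty-printed through jq. A sketch of that idiom for the single-subsystem case, with the env vars from the trace ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT) replaced by their logged values; the exact join/print order inside the real helper is partly reconstructed:

config=()
for subsystem in 1; do
	config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
	)")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .   # "," joins entries; jq validates and pretty-prints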
00:21:23.106 passed 00:21:23.106 Test: blockdev write read 8 blocks ...passed 00:21:23.106 Test: blockdev write read size > 128k ...passed 00:21:23.106 Test: blockdev write read invalid size ...passed 00:21:23.106 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:23.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:23.106 Test: blockdev write read max offset ...passed 00:21:23.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:23.106 Test: blockdev writev readv 8 blocks ...passed 00:21:23.106 Test: blockdev writev readv 30 x 1block ...passed 00:21:23.106 Test: blockdev writev readv block ...passed 00:21:23.106 Test: blockdev writev readv size > 128k ...passed 00:21:23.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:23.106 Test: blockdev comparev and writev ...[2024-07-26 18:21:49.184516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.106 [2024-07-26 18:21:49.184551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.106 [2024-07-26 18:21:49.184576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.106 [2024-07-26 18:21:49.184594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:23.106 [2024-07-26 18:21:49.184999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.106 [2024-07-26 18:21:49.185024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:23.106 [2024-07-26 18:21:49.185046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.107 [2024-07-26 18:21:49.185078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:23.107 [2024-07-26 18:21:49.185491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.107 [2024-07-26 18:21:49.185515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:23.107 [2024-07-26 18:21:49.185542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.107 [2024-07-26 18:21:49.185559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:23.107 [2024-07-26 18:21:49.185963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.107 [2024-07-26 18:21:49.185988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:23.107 [2024-07-26 18:21:49.186010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.107 [2024-07-26 18:21:49.186026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:23.107 passed 00:21:23.364 Test: blockdev nvme passthru rw ...passed 00:21:23.364 Test: blockdev nvme passthru vendor specific ...[2024-07-26 18:21:49.268445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.364 [2024-07-26 18:21:49.268474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:23.364 [2024-07-26 18:21:49.268662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.364 [2024-07-26 18:21:49.268686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:23.364 [2024-07-26 18:21:49.268872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.364 [2024-07-26 18:21:49.268895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:23.364 [2024-07-26 18:21:49.269086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.364 [2024-07-26 18:21:49.269111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:23.364 passed 00:21:23.364 Test: blockdev nvme admin passthru ...passed 00:21:23.364 Test: blockdev copy ...passed 00:21:23.364 00:21:23.364 Run Summary: Type Total Ran Passed Failed Inactive 00:21:23.364 suites 1 1 n/a 0 0 00:21:23.364 tests 23 23 23 0 0 00:21:23.364 asserts 152 152 152 0 n/a 00:21:23.364 00:21:23.364 Elapsed time = 1.356 seconds 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.622 rmmod nvme_tcp 00:21:23.622 rmmod nvme_fabrics 00:21:23.622 rmmod nvme_keyring 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1495984 ']' 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1495984 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1495984 ']' 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1495984 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495984 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495984' 00:21:23.622 killing process with pid 1495984 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1495984 00:21:23.622 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1495984 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.187 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.094 00:21:26.094 real 0m6.324s 00:21:26.094 user 0m10.585s 00:21:26.094 sys 0m2.402s 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:26.094 ************************************ 00:21:26.094 END TEST nvmf_bdevio_no_huge 00:21:26.094 ************************************ 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:26.094 ************************************ 00:21:26.094 START TEST nvmf_tls 00:21:26.094 ************************************ 00:21:26.094 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:26.352 * Looking for test storage... 00:21:26.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.352 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
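This second pass through common.sh (now for the TLS suite) rebuilds the target's argv the same way as before: build_nvmf_app_args appends the shared-memory id, the 0xFFFF tracepoint mask, and any NO_HUGE flags to NVMF_APP, and once the namespace is up the whole vector is prefixed with the netns wrapper. A compact sketch following the array operations visible in the trace; the base-binary element and the backgrounded launch are assumptions:

NVMF_APP=("$SPDK_DIR"/build/bin/nvmf_tgt)    # assumed base entry; the trace logs the full path
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + tracepoint mask, as in the trace
NVMF_APP+=("${NO_HUGE[@]}")                  # empty by default; the bdevio run above used (--no-huge -s 1024)
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0x78 &                   # core mask supplied at nvmfappstart time
nvmfpid=$!
waitforlisten "$nvmfpid"                     # helper from autotest_common.sh, as logged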
00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.353 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.258 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:28.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:28.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:28.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:28.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.259 18:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:21:28.259 00:21:28.259 --- 10.0.0.2 ping statistics --- 00:21:28.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.259 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:21:28.259 00:21:28.259 --- 10.0.0.1 ping statistics --- 00:21:28.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.259 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.259 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1498187 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1498187 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1498187 ']' 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.260 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.260 [2024-07-26 18:21:54.348328] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
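At this point nvmftestinit has finished preparing the test bed: the two ice ports discovered above are split so that target and initiator traffic crosses real hardware. Regrouped from the trace (target side first), the wiring amounts to:

    # target side: cvl_0_0 moves into its own namespace and gets 10.0.0.2/24
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: cvl_0_1 stays in the default namespace with 10.0.0.1/24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Both directions are ping-verified above, and NVMF_APP is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (common.sh@270), so nvmf_tgt runs inside the namespace while the initiators run outside it; the two ports are presumably cabled back-to-back on this rig.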
00:21:28.260 [2024-07-26 18:21:54.348403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.260 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.260 [2024-07-26 18:21:54.387668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:28.518 [2024-07-26 18:21:54.416462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.518 [2024-07-26 18:21:54.502716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.518 [2024-07-26 18:21:54.502779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.518 [2024-07-26 18:21:54.502792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.518 [2024-07-26 18:21:54.502803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.518 [2024-07-26 18:21:54.502812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.518 [2024-07-26 18:21:54.502839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:28.518 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:28.776 true 00:21:28.776 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:28.776 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:29.062 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:29.062 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:29.062 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:29.320 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:29.320 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:29.577 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:29.577 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 
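With the target up, tls.sh probes the ssl socket implementation's options over JSON-RPC: set a value, read it back, compare. Stripped of the xtrace noise, each round-trip in this stretch of the log looks roughly like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl                  # returns 'true' above
    $rpc sock_impl_set_options -i ssl --tls-version 13
    version=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ $version != 13 ]] && exit 1                     # a mismatch would fail the test

The same pattern repeats below for --tls-version 7 and for --enable-ktls / --disable-ktls (read back via jq -r .enable_ktls).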
00:21:29.577 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:29.836 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:29.836 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:30.095 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:30.095 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:30.095 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.095 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:30.355 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:30.355 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:30.355 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:30.614 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.614 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:30.872 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:30.872 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:30.872 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:31.130 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.130 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:31.390 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.BN0cGYW6tQ 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.cPoTWpt96E 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.BN0cGYW6tQ 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cPoTWpt96E 00:21:31.651 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:31.911 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:32.170 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.BN0cGYW6tQ 00:21:32.170 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BN0cGYW6tQ 00:21:32.170 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.428 [2024-07-26 18:21:58.391267] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.428 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:32.686 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:32.944 [2024-07-26 18:21:58.892650] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.944 [2024-07-26 18:21:58.892886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.944 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.203 malloc0 00:21:33.203 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.462 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BN0cGYW6tQ 00:21:33.722 [2024-07-26 18:21:59.642691] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:33.722 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BN0cGYW6tQ 00:21:33.722 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.712 Initializing NVMe Controllers 00:21:43.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:43.712 Initialization complete. Launching workers. 00:21:43.712 ======================================================== 00:21:43.712 Latency(us) 00:21:43.712 Device Information : IOPS MiB/s Average min max 00:21:43.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7800.22 30.47 8207.55 1191.95 9286.08 00:21:43.712 ======================================================== 00:21:43.712 Total : 7800.22 30.47 8207.55 1191.95 9286.08 00:21:43.712 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BN0cGYW6tQ 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BN0cGYW6tQ' 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1500086 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1500086 /var/tmp/bdevperf.sock 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1500086 ']' 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.712 18:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.712 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.712 [2024-07-26 18:22:09.815425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:43.712 [2024-07-26 18:22:09.815501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500086 ] 00:21:43.712 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.712 [2024-07-26 18:22:09.847313] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:43.970 [2024-07-26 18:22:09.874224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.971 [2024-07-26 18:22:09.957358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.971 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.971 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:43.971 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BN0cGYW6tQ 00:21:44.230 [2024-07-26 18:22:10.353037] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.230 [2024-07-26 18:22:10.353161] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.489 TLSTESTn1 00:21:44.489 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.489 Running I/O for 10 seconds... 
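Both ends of the run above share the same key file: the target registered /tmp/tmp.BN0cGYW6tQ for host1 with nvmf_subsystem_add_host --psk, and spdk_nvme_perf and bdevperf present the identical file. The key was built earlier by format_interchange_psk, which shells out to python; a sketch of that encoding, on the assumption that it base64-wraps the ASCII key plus a little-endian CRC32 trailer (the authoritative helper is format_key in nvmf/common.sh):

    format_interchange_psk() {
        local key=$1 digest=$2
        # assumed layout: prefix : 2-digit digest id : base64(key || CRC32-LE(key)) :
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); p=k+zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(p).decode()))' "$key" "$digest"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:  (the key logged above)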
00:21:56.703 00:21:56.703 Latency(us) 00:21:56.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.704 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:56.704 Verification LBA range: start 0x0 length 0x2000 00:21:56.704 TLSTESTn1 : 10.05 2178.47 8.51 0.00 0.00 58599.24 11699.39 90488.23 00:21:56.704 =================================================================================================================== 00:21:56.704 Total : 2178.47 8.51 0.00 0.00 58599.24 11699.39 90488.23 00:21:56.704 0 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1500086 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1500086 ']' 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1500086 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1500086 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1500086' 00:21:56.704 killing process with pid 1500086 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1500086 00:21:56.704 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.704 00:21:56.704 Latency(us) 00:21:56.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.704 =================================================================================================================== 00:21:56.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.704 [2024-07-26 18:22:20.673204] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1500086 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cPoTWpt96E 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cPoTWpt96E 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
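From here to the end of the section the suite walks the failure paths: tls.sh@146, @149, @152 and @155 wrap run_bdevperf in NOT, so a case passes only when bdev_nvme_attach_controller fails. The first case, started above, presents /tmp/tmp.cPoTWpt96E, a well-formed key the target never registered, so the TLS handshake cannot complete. The xtrace around every case follows the same shape:

    NOT run_bdevperf "$subnqn" "$hostnqn" "$psk"   # passes iff the inner call fails
    # valid_exec_arg confirms run_bdevperf is callable, runs it, captures es=$?,
    # and the final (( !es == 0 )) turns es=1 ("attach failed") into success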
00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cPoTWpt96E 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cPoTWpt96E' 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1501283 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1501283 /var/tmp/bdevperf.sock 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1501283 ']' 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.704 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.704 [2024-07-26 18:22:20.937529] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:56.704 [2024-07-26 18:22:20.937601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501283 ] 00:21:56.704 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.704 [2024-07-26 18:22:20.969686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:56.704 [2024-07-26 18:22:20.995378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.704 [2024-07-26 18:22:21.079688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cPoTWpt96E 00:21:56.704 [2024-07-26 18:22:21.460324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.704 [2024-07-26 18:22:21.460448] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:56.704 [2024-07-26 18:22:21.471929] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:56.704 [2024-07-26 18:22:21.472410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20468d0 (107): Transport endpoint is not connected 00:21:56.704 [2024-07-26 18:22:21.473399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20468d0 (9): Bad file descriptor 00:21:56.704 [2024-07-26 18:22:21.474398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.704 [2024-07-26 18:22:21.474423] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:56.704 [2024-07-26 18:22:21.474439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:56.704 request: 00:21:56.704 { 00:21:56.704 "name": "TLSTEST", 00:21:56.704 "trtype": "tcp", 00:21:56.704 "traddr": "10.0.0.2", 00:21:56.704 "adrfam": "ipv4", 00:21:56.704 "trsvcid": "4420", 00:21:56.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.704 "prchk_reftag": false, 00:21:56.704 "prchk_guard": false, 00:21:56.704 "hdgst": false, 00:21:56.704 "ddgst": false, 00:21:56.704 "psk": "/tmp/tmp.cPoTWpt96E", 00:21:56.704 "method": "bdev_nvme_attach_controller", 00:21:56.704 "req_id": 1 00:21:56.704 } 00:21:56.704 Got JSON-RPC error response 00:21:56.704 response: 00:21:56.704 { 00:21:56.704 "code": -5, 00:21:56.704 "message": "Input/output error" 00:21:56.704 } 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1501283 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1501283 ']' 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1501283 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501283 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501283' 00:21:56.704 killing process with pid 1501283 00:21:56.704 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1501283 00:21:56.704 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.704 00:21:56.704 Latency(us) 00:21:56.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.704 =================================================================================================================== 00:21:56.704 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.705 [2024-07-26 18:22:21.523760] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1501283 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BN0cGYW6tQ 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BN0cGYW6tQ 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BN0cGYW6tQ 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BN0cGYW6tQ' 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1501420 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1501420 /var/tmp/bdevperf.sock 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1501420 ']' 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.705 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.705 [2024-07-26 18:22:21.785589] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:56.705 [2024-07-26 18:22:21.785679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501420 ] 00:21:56.705 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.705 [2024-07-26 18:22:21.817924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
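The attach that follows (tls.sh@149) reuses the good key but presents hostnqn host2, which was never associated with a PSK; tls.sh@152 below does the inverse with the nonexistent subsystem cnode2. The errors in both cases show what the target actually searches for, a TLS PSK identity string combining the protocol tag with the two NQNs:

    # identity format, exactly as printed in the 'Could not find PSK for
    # identity' errors below ($hostnqn/$subnqn stand for the two NQNs)
    identity="NVMe0R01 ${hostnqn} ${subnqn}"

A mismatch on either NQN therefore makes the lookup, and with it the handshake, fail.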
00:21:56.705 [2024-07-26 18:22:21.845476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.705 [2024-07-26 18:22:21.930534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.BN0cGYW6tQ 00:21:56.705 [2024-07-26 18:22:22.284288] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.705 [2024-07-26 18:22:22.284431] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:56.705 [2024-07-26 18:22:22.293961] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:56.705 [2024-07-26 18:22:22.293995] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:56.705 [2024-07-26 18:22:22.294047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:56.705 [2024-07-26 18:22:22.294520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aca8d0 (107): Transport endpoint is not connected 00:21:56.705 [2024-07-26 18:22:22.295510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aca8d0 (9): Bad file descriptor 00:21:56.705 [2024-07-26 18:22:22.296513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.705 [2024-07-26 18:22:22.296535] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:56.705 [2024-07-26 18:22:22.296552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:56.705 request: 00:21:56.705 { 00:21:56.705 "name": "TLSTEST", 00:21:56.705 "trtype": "tcp", 00:21:56.705 "traddr": "10.0.0.2", 00:21:56.705 "adrfam": "ipv4", 00:21:56.705 "trsvcid": "4420", 00:21:56.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.705 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:56.705 "prchk_reftag": false, 00:21:56.705 "prchk_guard": false, 00:21:56.705 "hdgst": false, 00:21:56.705 "ddgst": false, 00:21:56.705 "psk": "/tmp/tmp.BN0cGYW6tQ", 00:21:56.705 "method": "bdev_nvme_attach_controller", 00:21:56.705 "req_id": 1 00:21:56.705 } 00:21:56.705 Got JSON-RPC error response 00:21:56.705 response: 00:21:56.705 { 00:21:56.705 "code": -5, 00:21:56.705 "message": "Input/output error" 00:21:56.705 } 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1501420 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1501420 ']' 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1501420 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501420 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501420' 00:21:56.705 killing process with pid 1501420 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1501420 00:21:56.705 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.705 00:21:56.705 Latency(us) 00:21:56.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.705 =================================================================================================================== 00:21:56.705 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.705 [2024-07-26 18:22:22.348375] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1501420 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BN0cGYW6tQ 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BN0cGYW6tQ 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BN0cGYW6tQ 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BN0cGYW6tQ' 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1501552 00:21:56.705 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1501552 /var/tmp/bdevperf.sock 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1501552 ']' 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.706 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.706 [2024-07-26 18:22:22.617513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:56.706 [2024-07-26 18:22:22.617603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501552 ] 00:21:56.706 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.706 [2024-07-26 18:22:22.649396] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:56.706 [2024-07-26 18:22:22.676929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.706 [2024-07-26 18:22:22.759754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.964 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.964 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:56.965 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BN0cGYW6tQ 00:21:57.224 [2024-07-26 18:22:23.142194] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.224 [2024-07-26 18:22:23.142313] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:57.224 [2024-07-26 18:22:23.147594] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:57.224 [2024-07-26 18:22:23.147630] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:57.224 [2024-07-26 18:22:23.147672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.224 [2024-07-26 18:22:23.148213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd988d0 (107): Transport endpoint is not connected 00:21:57.224 [2024-07-26 18:22:23.149200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd988d0 (9): Bad file descriptor 00:21:57.224 [2024-07-26 18:22:23.150198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:57.224 [2024-07-26 18:22:23.150220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:57.224 [2024-07-26 18:22:23.150238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:57.224 request: 00:21:57.224 { 00:21:57.224 "name": "TLSTEST", 00:21:57.224 "trtype": "tcp", 00:21:57.224 "traddr": "10.0.0.2", 00:21:57.224 "adrfam": "ipv4", 00:21:57.224 "trsvcid": "4420", 00:21:57.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.224 "prchk_reftag": false, 00:21:57.224 "prchk_guard": false, 00:21:57.224 "hdgst": false, 00:21:57.224 "ddgst": false, 00:21:57.224 "psk": "/tmp/tmp.BN0cGYW6tQ", 00:21:57.224 "method": "bdev_nvme_attach_controller", 00:21:57.224 "req_id": 1 00:21:57.224 } 00:21:57.224 Got JSON-RPC error response 00:21:57.224 response: 00:21:57.224 { 00:21:57.224 "code": -5, 00:21:57.224 "message": "Input/output error" 00:21:57.224 } 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1501552 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1501552 ']' 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1501552 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501552 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501552' 00:21:57.224 killing process with pid 1501552 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1501552 00:21:57.224 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.224 00:21:57.224 Latency(us) 00:21:57.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.224 =================================================================================================================== 00:21:57.224 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.224 [2024-07-26 18:22:23.199777] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:57.224 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1501552 00:21:57.481 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:57.481 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1501692 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1501692 /var/tmp/bdevperf.sock 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1501692 ']' 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.482 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.482 [2024-07-26 18:22:23.448005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:57.482 [2024-07-26 18:22:23.448115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501692 ] 00:21:57.482 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.482 [2024-07-26 18:22:23.479084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
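Every step in this test drives SPDK through scripts/rpc.py, which speaks plain JSON-RPC 2.0 over a Unix domain socket (the -s /var/tmp/bdevperf.sock argument); the request:/response: blocks above are the raw objects on that socket. A minimal client sketch, assuming the transport is unframed JSON so the reader simply accumulates bytes until one complete object parses:

import json
import socket

def rpc_call(sock_path: str, method: str, params=None):
    # Send one JSON-RPC 2.0 request and collect the reply.
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply")
            buf += chunk
            try:
                return json.loads(buf)  # complete response assembled
            except ValueError:
                continue                # partial JSON, keep reading

Calling rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", ...) with the params shown in the request block above would reproduce the {"code": -5, "message": "Input/output error"} response; the same helper is reused in the sketches further down.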
00:21:57.482 [2024-07-26 18:22:23.505779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.482 [2024-07-26 18:22:23.591159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:57.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:58.001 [2024-07-26 18:22:23.928402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:58.001 [2024-07-26 18:22:23.929825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d3de0 (9): Bad file descriptor 00:21:58.001 [2024-07-26 18:22:23.930819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:58.001 [2024-07-26 18:22:23.930841] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:58.001 [2024-07-26 18:22:23.930858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:58.001 request: 00:21:58.001 { 00:21:58.001 "name": "TLSTEST", 00:21:58.001 "trtype": "tcp", 00:21:58.001 "traddr": "10.0.0.2", 00:21:58.001 "adrfam": "ipv4", 00:21:58.001 "trsvcid": "4420", 00:21:58.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.001 "prchk_reftag": false, 00:21:58.001 "prchk_guard": false, 00:21:58.001 "hdgst": false, 00:21:58.001 "ddgst": false, 00:21:58.001 "method": "bdev_nvme_attach_controller", 00:21:58.001 "req_id": 1 00:21:58.001 } 00:21:58.001 Got JSON-RPC error response 00:21:58.001 response: 00:21:58.001 { 00:21:58.001 "code": -5, 00:21:58.001 "message": "Input/output error" 00:21:58.001 } 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1501692 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1501692 ']' 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1501692 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501692 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501692' 00:21:58.001 killing process with pid 1501692 00:21:58.001 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1501692 00:21:58.001 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.001 00:21:58.001 Latency(us) 00:21:58.002 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.002 =================================================================================================================== 00:21:58.002 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.002 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1501692 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1498187 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1498187 ']' 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1498187 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1498187 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1498187' 00:21:58.298 killing process with pid 1498187 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1498187 00:21:58.298 [2024-07-26 18:22:24.233544] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:58.298 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1498187 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:58.556 18:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.dpGKG7c5nw 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.dpGKG7c5nw 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1501840 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1501840 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1501840 ']' 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.556 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.556 [2024-07-26 18:22:24.581551] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:58.556 [2024-07-26 18:22:24.581647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.556 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.556 [2024-07-26 18:22:24.618643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:58.556 [2024-07-26 18:22:24.651019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.814 [2024-07-26 18:22:24.740365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.814 [2024-07-26 18:22:24.740431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.814 [2024-07-26 18:22:24.740458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.814 [2024-07-26 18:22:24.740473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.814 [2024-07-26 18:22:24.740486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
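The format_interchange_psk step above wraps the 48-character configured secret 00112233445566778899aabbccddeeff0011223344556677 into the TLS PSK interchange form (NVMeTLSkey-1:02:...:, where the 02 selects SHA-384). A sketch of what the python heredoc in nvmf/common.sh computes, under the assumption that the four bytes appended before base64 encoding are the secret's CRC32 in little-endian order, serving as the interchange format's integrity check:

import base64
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    data = secret.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")   # integrity-check bytes
    b64 = base64.b64encode(data + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

# The base64 run MDAxMTIy... in the key_long value above is the ASCII
# secret itself; the trailing wWXNJw== should be the checksum bytes if
# the CRC assumption holds.
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))

The key is then written to a mktemp file (/tmp/tmp.dpGKG7c5nw) and chmod 0600, because, as the 0666 experiments later in this log show, both the initiator and the target refuse PSK files readable by group or other.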
00:21:58.814 [2024-07-26 18:22:24.740515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.dpGKG7c5nw 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dpGKG7c5nw 00:21:58.814 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:59.072 [2024-07-26 18:22:25.158150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.072 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:59.331 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:59.589 [2024-07-26 18:22:25.711651] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.589 [2024-07-26 18:22:25.711880] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.589 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:59.847 malloc0 00:21:59.847 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:00.105 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:00.364 [2024-07-26 18:22:26.444792] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dpGKG7c5nw 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dpGKG7c5nw' 00:22:00.364 18:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1502019 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1502019 /var/tmp/bdevperf.sock 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1502019 ']' 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.364 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.622 [2024-07-26 18:22:26.511227] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:00.622 [2024-07-26 18:22:26.511308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502019 ] 00:22:00.622 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.622 [2024-07-26 18:22:26.542262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
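The setup_nvmf_tgt sequence above (TCP transport, subsystem cnode1, TLS-required listener via -k, a 32 MiB malloc namespace, and a host entry bound to the PSK file) maps one-to-one onto JSON-RPC methods. A sketch using the rpc_call helper from earlier; the parameter names and values are taken from the save_config dump near the end of this log:

TGT = "/var/tmp/spdk.sock"
NQN = "nqn.2016-06.io.spdk:cnode1"

for method, params in [
    ("nvmf_create_transport", {"trtype": "TCP"}),
    ("nvmf_create_subsystem", {"nqn": NQN,
                               "serial_number": "SPDK00000000000001",
                               "max_namespaces": 10}),
    # the -k flag becomes secure_channel, i.e. TLS is mandatory
    ("nvmf_subsystem_add_listener", {"nqn": NQN,
                                     "secure_channel": True,
                                     "listen_address": {"trtype": "TCP",
                                                        "adrfam": "IPv4",
                                                        "traddr": "10.0.0.2",
                                                        "trsvcid": "4420"}}),
    # 32 MiB at 4 KiB blocks -> num_blocks 8192, matching the config dump
    ("bdev_malloc_create", {"name": "malloc0",
                            "num_blocks": 8192,
                            "block_size": 4096}),
    ("nvmf_subsystem_add_ns", {"nqn": NQN,
                               "namespace": {"nsid": 1,
                                             "bdev_name": "malloc0"}}),
    ("nvmf_subsystem_add_host", {"nqn": NQN,
                                 "host": "nqn.2016-06.io.spdk:host1",
                                 "psk": "/tmp/tmp.dpGKG7c5nw"}),
]:
    rpc_call(TGT, method, params)

The add_host call is what trips the "nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09" warning visible throughout this run.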
00:22:00.622 [2024-07-26 18:22:26.570326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.622 [2024-07-26 18:22:26.661921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.622 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.622 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:00.623 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:00.882 [2024-07-26 18:22:26.991995] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.882 [2024-07-26 18:22:26.992115] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:01.141 TLSTESTn1 00:22:01.141 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:01.141 Running I/O for 10 seconds... 00:22:11.126 00:22:11.126 Latency(us) 00:22:11.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.126 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:11.126 Verification LBA range: start 0x0 length 0x2000 00:22:11.126 TLSTESTn1 : 10.05 2410.60 9.42 0.00 0.00 52955.92 5898.24 86992.97 00:22:11.126 =================================================================================================================== 00:22:11.126 Total : 2410.60 9.42 0.00 0.00 52955.92 5898.24 86992.97 00:22:11.126 0 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1502019 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1502019 ']' 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1502019 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1502019 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1502019' 00:22:11.385 killing process with pid 1502019 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1502019 00:22:11.385 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.385 00:22:11.385 Latency(us) 00:22:11.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.385 
=================================================================================================================== 00:22:11.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.385 [2024-07-26 18:22:37.312326] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:11.385 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1502019 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.dpGKG7c5nw 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dpGKG7c5nw 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dpGKG7c5nw 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dpGKG7c5nw 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dpGKG7c5nw' 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1503337 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1503337 /var/tmp/bdevperf.sock 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1503337 ']' 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
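Having completed a clean 10-second verify run over TLS (2410.60 IOPS in the table above), the test now loosens the key file to 0666 and expects the next attach to fail: bdev_nvme_load_psk on the initiator refuses world-readable key files, which shows up just below as "Incorrect permissions for PSK file" and a -1 Operation not permitted RPC response. A sketch of an equivalent permission gate:

import os
import stat

def psk_file_is_private(path: str) -> bool:
    # Accept the PSK file only if no group/other permission bits are set,
    # mirroring the check that the chmod 0666 above is designed to trip.
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# After chmod 0600: True (key is loaded).  After chmod 0666: False
# (the controller attach is refused before any TLS handshake starts).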
00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.645 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.645 [2024-07-26 18:22:37.585549] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:11.645 [2024-07-26 18:22:37.585638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503337 ] 00:22:11.645 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.645 [2024-07-26 18:22:37.616844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:11.645 [2024-07-26 18:22:37.643521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.645 [2024-07-26 18:22:37.723536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.903 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.903 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:11.903 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:12.162 [2024-07-26 18:22:38.109394] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.162 [2024-07-26 18:22:38.109489] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:12.162 [2024-07-26 18:22:38.109508] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.dpGKG7c5nw 00:22:12.162 request: 00:22:12.162 { 00:22:12.162 "name": "TLSTEST", 00:22:12.162 "trtype": "tcp", 00:22:12.162 "traddr": "10.0.0.2", 00:22:12.162 "adrfam": "ipv4", 00:22:12.162 "trsvcid": "4420", 00:22:12.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.162 "prchk_reftag": false, 00:22:12.162 "prchk_guard": false, 00:22:12.162 "hdgst": false, 00:22:12.162 "ddgst": false, 00:22:12.162 "psk": "/tmp/tmp.dpGKG7c5nw", 00:22:12.162 "method": "bdev_nvme_attach_controller", 00:22:12.162 "req_id": 1 00:22:12.162 } 00:22:12.162 Got JSON-RPC error response 00:22:12.162 response: 00:22:12.162 { 00:22:12.162 "code": -1, 00:22:12.162 "message": "Operation not permitted" 00:22:12.162 } 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1503337 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1503337 ']' 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1503337 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503337 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503337' 00:22:12.162 killing process with pid 1503337 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1503337 00:22:12.162 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.162 00:22:12.162 Latency(us) 00:22:12.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.162 =================================================================================================================== 00:22:12.162 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:12.162 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1503337 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1501840 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1501840 ']' 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1501840 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501840 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501840' 00:22:12.421 killing process with pid 1501840 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1501840 00:22:12.421 [2024-07-26 18:22:38.397307] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:12.421 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1501840 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1503483 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1503483 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1503483 ']' 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.681 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.681 [2024-07-26 18:22:38.694245] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:12.681 [2024-07-26 18:22:38.694327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.681 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.681 [2024-07-26 18:22:38.737022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:12.681 [2024-07-26 18:22:38.769154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.940 [2024-07-26 18:22:38.864813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.940 [2024-07-26 18:22:38.864884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.940 [2024-07-26 18:22:38.864900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.940 [2024-07-26 18:22:38.864913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.940 [2024-07-26 18:22:38.864925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
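The target is restarted and the next stage repeats the bring-up with the key file still at 0666, this time expecting the failure on the target side: nvmf_subsystem_add_host itself cannot accept the PSK and returns -32603 Internal error, as shown below. The shell's NOT/valid_exec_arg wrappers assert exactly that a step fails; the same inverted assertion for an RPC call, reusing rpc_call from earlier:

def expect_rpc_failure(sock_path: str, method: str, params: dict) -> dict:
    # Succeed only if the target reports a JSON-RPC error, mirroring
    # autotest_common.sh's NOT helper for shell commands.
    resp = rpc_call(sock_path, method, params)
    assert "error" in resp, f"{method} unexpectedly succeeded"
    return resp["error"]

err = expect_rpc_failure("/var/tmp/spdk.sock", "nvmf_subsystem_add_host",
                         {"nqn": "nqn.2016-06.io.spdk:cnode1",
                          "host": "nqn.2016-06.io.spdk:host1",
                          "psk": "/tmp/tmp.dpGKG7c5nw"})
# err == {"code": -32603, "message": "Internal error"} while the key
# file is world-readable.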
00:22:12.940 [2024-07-26 18:22:38.864956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.940 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.940 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:12.940 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.940 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.940 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.dpGKG7c5nw 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dpGKG7c5nw 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.dpGKG7c5nw 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dpGKG7c5nw 00:22:12.940 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:13.198 [2024-07-26 18:22:39.288840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.198 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:13.455 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:13.713 [2024-07-26 18:22:39.794184] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.713 [2024-07-26 18:22:39.794439] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.713 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:13.971 malloc0 00:22:13.971 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:14.228 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:14.487 [2024-07-26 18:22:40.555874] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:14.487 [2024-07-26 18:22:40.555918] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:14.487 [2024-07-26 18:22:40.555967] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:14.487 request: 00:22:14.487 { 00:22:14.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.487 "host": "nqn.2016-06.io.spdk:host1", 00:22:14.487 "psk": "/tmp/tmp.dpGKG7c5nw", 00:22:14.487 "method": "nvmf_subsystem_add_host", 00:22:14.487 "req_id": 1 00:22:14.487 } 00:22:14.487 Got JSON-RPC error response 00:22:14.487 response: 00:22:14.487 { 00:22:14.487 "code": -32603, 00:22:14.487 "message": "Internal error" 00:22:14.487 } 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1503483 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1503483 ']' 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1503483 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503483 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503483' 00:22:14.487 killing process with pid 1503483 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1503483 00:22:14.487 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1503483 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.dpGKG7c5nw 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1503776 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:14.745 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1503776 00:22:14.746 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1503776 ']' 00:22:14.746 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.746 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.746 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.746 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.746 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.005 [2024-07-26 18:22:40.920217] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:15.005 [2024-07-26 18:22:40.920298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.005 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.005 [2024-07-26 18:22:40.957471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.005 [2024-07-26 18:22:40.989743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.005 [2024-07-26 18:22:41.076903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.005 [2024-07-26 18:22:41.076967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.005 [2024-07-26 18:22:41.076992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.005 [2024-07-26 18:22:41.077006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.005 [2024-07-26 18:22:41.077020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
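With the key restored to 0600 the full TLS path is exercised once more, and the test then snapshots the running configuration of both the target and bdevperf with save_config; the two large JSON documents that follow are those snapshots. A sketch of fetching the target's dump programmatically and confirming the listener really is TLS-enabled (the nesting matches the dump below; treating the RPC "result" as the top-level object is an assumption):

cfg = rpc_call("/var/tmp/spdk.sock", "save_config")
nvmf = next(s for s in cfg["result"]["subsystems"]
            if s["subsystem"] == "nvmf")
tls_listeners = [c["params"]["listen_address"]
                 for c in nvmf["config"]
                 if c["method"] == "nvmf_subsystem_add_listener"
                 and c["params"].get("secure_channel")]
assert tls_listeners == [{"trtype": "TCP", "adrfam": "IPv4",
                          "traddr": "10.0.0.2", "trsvcid": "4420"}]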
00:22:15.005 [2024-07-26 18:22:41.077073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.dpGKG7c5nw 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dpGKG7c5nw 00:22:15.263 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.520 [2024-07-26 18:22:41.453507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.520 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:15.777 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:16.035 [2024-07-26 18:22:41.966917] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.035 [2024-07-26 18:22:41.967205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.035 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.292 malloc0 00:22:16.292 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:16.550 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:16.811 [2024-07-26 18:22:42.712565] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1504055 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1504055 /var/tmp/bdevperf.sock 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1504055 ']' 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.811 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.811 [2024-07-26 18:22:42.771815] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:16.811 [2024-07-26 18:22:42.771896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504055 ] 00:22:16.811 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.811 [2024-07-26 18:22:42.804610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:16.811 [2024-07-26 18:22:42.832132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.811 [2024-07-26 18:22:42.922982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.070 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.070 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:17.070 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:17.339 [2024-07-26 18:22:43.257983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.339 [2024-07-26 18:22:43.258120] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:17.339 TLSTESTn1 00:22:17.339 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:17.596 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:17.596 "subsystems": [ 00:22:17.596 { 00:22:17.596 "subsystem": "keyring", 00:22:17.596 "config": [] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "iobuf", 00:22:17.596 "config": [ 00:22:17.596 { 00:22:17.596 "method": "iobuf_set_options", 00:22:17.596 "params": { 00:22:17.596 "small_pool_count": 8192, 00:22:17.596 "large_pool_count": 1024, 00:22:17.596 "small_bufsize": 8192, 00:22:17.596 "large_bufsize": 135168 00:22:17.596 } 00:22:17.596 } 00:22:17.596 ] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "sock", 00:22:17.596 "config": [ 00:22:17.596 { 00:22:17.596 "method": "sock_set_default_impl", 00:22:17.596 "params": { 00:22:17.596 "impl_name": "posix" 00:22:17.596 } 00:22:17.596 }, 
00:22:17.596 { 00:22:17.596 "method": "sock_impl_set_options", 00:22:17.596 "params": { 00:22:17.596 "impl_name": "ssl", 00:22:17.596 "recv_buf_size": 4096, 00:22:17.596 "send_buf_size": 4096, 00:22:17.596 "enable_recv_pipe": true, 00:22:17.596 "enable_quickack": false, 00:22:17.596 "enable_placement_id": 0, 00:22:17.596 "enable_zerocopy_send_server": true, 00:22:17.596 "enable_zerocopy_send_client": false, 00:22:17.596 "zerocopy_threshold": 0, 00:22:17.596 "tls_version": 0, 00:22:17.596 "enable_ktls": false 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "sock_impl_set_options", 00:22:17.596 "params": { 00:22:17.596 "impl_name": "posix", 00:22:17.596 "recv_buf_size": 2097152, 00:22:17.596 "send_buf_size": 2097152, 00:22:17.596 "enable_recv_pipe": true, 00:22:17.596 "enable_quickack": false, 00:22:17.596 "enable_placement_id": 0, 00:22:17.596 "enable_zerocopy_send_server": true, 00:22:17.596 "enable_zerocopy_send_client": false, 00:22:17.596 "zerocopy_threshold": 0, 00:22:17.596 "tls_version": 0, 00:22:17.596 "enable_ktls": false 00:22:17.596 } 00:22:17.596 } 00:22:17.596 ] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "vmd", 00:22:17.596 "config": [] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "accel", 00:22:17.596 "config": [ 00:22:17.596 { 00:22:17.596 "method": "accel_set_options", 00:22:17.596 "params": { 00:22:17.596 "small_cache_size": 128, 00:22:17.596 "large_cache_size": 16, 00:22:17.596 "task_count": 2048, 00:22:17.596 "sequence_count": 2048, 00:22:17.596 "buf_count": 2048 00:22:17.596 } 00:22:17.596 } 00:22:17.596 ] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "bdev", 00:22:17.596 "config": [ 00:22:17.596 { 00:22:17.596 "method": "bdev_set_options", 00:22:17.596 "params": { 00:22:17.596 "bdev_io_pool_size": 65535, 00:22:17.596 "bdev_io_cache_size": 256, 00:22:17.596 "bdev_auto_examine": true, 00:22:17.596 "iobuf_small_cache_size": 128, 00:22:17.596 "iobuf_large_cache_size": 16 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "bdev_raid_set_options", 00:22:17.596 "params": { 00:22:17.596 "process_window_size_kb": 1024, 00:22:17.596 "process_max_bandwidth_mb_sec": 0 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "bdev_iscsi_set_options", 00:22:17.596 "params": { 00:22:17.596 "timeout_sec": 30 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "bdev_nvme_set_options", 00:22:17.596 "params": { 00:22:17.596 "action_on_timeout": "none", 00:22:17.596 "timeout_us": 0, 00:22:17.596 "timeout_admin_us": 0, 00:22:17.596 "keep_alive_timeout_ms": 10000, 00:22:17.596 "arbitration_burst": 0, 00:22:17.596 "low_priority_weight": 0, 00:22:17.596 "medium_priority_weight": 0, 00:22:17.596 "high_priority_weight": 0, 00:22:17.596 "nvme_adminq_poll_period_us": 10000, 00:22:17.596 "nvme_ioq_poll_period_us": 0, 00:22:17.596 "io_queue_requests": 0, 00:22:17.596 "delay_cmd_submit": true, 00:22:17.596 "transport_retry_count": 4, 00:22:17.596 "bdev_retry_count": 3, 00:22:17.596 "transport_ack_timeout": 0, 00:22:17.596 "ctrlr_loss_timeout_sec": 0, 00:22:17.596 "reconnect_delay_sec": 0, 00:22:17.596 "fast_io_fail_timeout_sec": 0, 00:22:17.596 "disable_auto_failback": false, 00:22:17.596 "generate_uuids": false, 00:22:17.596 "transport_tos": 0, 00:22:17.596 "nvme_error_stat": false, 00:22:17.596 "rdma_srq_size": 0, 00:22:17.596 "io_path_stat": false, 00:22:17.596 "allow_accel_sequence": false, 00:22:17.596 "rdma_max_cq_size": 0, 00:22:17.596 "rdma_cm_event_timeout_ms": 0, 00:22:17.596 
"dhchap_digests": [ 00:22:17.596 "sha256", 00:22:17.596 "sha384", 00:22:17.596 "sha512" 00:22:17.596 ], 00:22:17.596 "dhchap_dhgroups": [ 00:22:17.596 "null", 00:22:17.596 "ffdhe2048", 00:22:17.596 "ffdhe3072", 00:22:17.596 "ffdhe4096", 00:22:17.596 "ffdhe6144", 00:22:17.596 "ffdhe8192" 00:22:17.596 ] 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "bdev_nvme_set_hotplug", 00:22:17.596 "params": { 00:22:17.596 "period_us": 100000, 00:22:17.596 "enable": false 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "bdev_malloc_create", 00:22:17.596 "params": { 00:22:17.596 "name": "malloc0", 00:22:17.596 "num_blocks": 8192, 00:22:17.596 "block_size": 4096, 00:22:17.596 "physical_block_size": 4096, 00:22:17.596 "uuid": "a6afa68a-9170-4345-922b-691263510a3c", 00:22:17.596 "optimal_io_boundary": 0, 00:22:17.596 "md_size": 0, 00:22:17.596 "dif_type": 0, 00:22:17.596 "dif_is_head_of_md": false, 00:22:17.596 "dif_pi_format": 0 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "bdev_wait_for_examine" 00:22:17.596 } 00:22:17.596 ] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "nbd", 00:22:17.596 "config": [] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "scheduler", 00:22:17.596 "config": [ 00:22:17.596 { 00:22:17.596 "method": "framework_set_scheduler", 00:22:17.596 "params": { 00:22:17.596 "name": "static" 00:22:17.596 } 00:22:17.596 } 00:22:17.596 ] 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "subsystem": "nvmf", 00:22:17.596 "config": [ 00:22:17.596 { 00:22:17.596 "method": "nvmf_set_config", 00:22:17.596 "params": { 00:22:17.596 "discovery_filter": "match_any", 00:22:17.596 "admin_cmd_passthru": { 00:22:17.596 "identify_ctrlr": false 00:22:17.596 } 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "nvmf_set_max_subsystems", 00:22:17.596 "params": { 00:22:17.596 "max_subsystems": 1024 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "nvmf_set_crdt", 00:22:17.596 "params": { 00:22:17.596 "crdt1": 0, 00:22:17.596 "crdt2": 0, 00:22:17.596 "crdt3": 0 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "nvmf_create_transport", 00:22:17.596 "params": { 00:22:17.596 "trtype": "TCP", 00:22:17.596 "max_queue_depth": 128, 00:22:17.596 "max_io_qpairs_per_ctrlr": 127, 00:22:17.596 "in_capsule_data_size": 4096, 00:22:17.596 "max_io_size": 131072, 00:22:17.596 "io_unit_size": 131072, 00:22:17.596 "max_aq_depth": 128, 00:22:17.596 "num_shared_buffers": 511, 00:22:17.596 "buf_cache_size": 4294967295, 00:22:17.596 "dif_insert_or_strip": false, 00:22:17.596 "zcopy": false, 00:22:17.596 "c2h_success": false, 00:22:17.596 "sock_priority": 0, 00:22:17.596 "abort_timeout_sec": 1, 00:22:17.596 "ack_timeout": 0, 00:22:17.596 "data_wr_pool_size": 0 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "nvmf_create_subsystem", 00:22:17.596 "params": { 00:22:17.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.596 "allow_any_host": false, 00:22:17.596 "serial_number": "SPDK00000000000001", 00:22:17.596 "model_number": "SPDK bdev Controller", 00:22:17.596 "max_namespaces": 10, 00:22:17.596 "min_cntlid": 1, 00:22:17.596 "max_cntlid": 65519, 00:22:17.596 "ana_reporting": false 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "nvmf_subsystem_add_host", 00:22:17.596 "params": { 00:22:17.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.596 "host": "nqn.2016-06.io.spdk:host1", 00:22:17.596 "psk": "/tmp/tmp.dpGKG7c5nw" 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 
00:22:17.596 "method": "nvmf_subsystem_add_ns", 00:22:17.596 "params": { 00:22:17.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.596 "namespace": { 00:22:17.596 "nsid": 1, 00:22:17.596 "bdev_name": "malloc0", 00:22:17.596 "nguid": "A6AFA68A91704345922B691263510A3C", 00:22:17.596 "uuid": "a6afa68a-9170-4345-922b-691263510a3c", 00:22:17.596 "no_auto_visible": false 00:22:17.596 } 00:22:17.596 } 00:22:17.596 }, 00:22:17.596 { 00:22:17.596 "method": "nvmf_subsystem_add_listener", 00:22:17.597 "params": { 00:22:17.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.597 "listen_address": { 00:22:17.597 "trtype": "TCP", 00:22:17.597 "adrfam": "IPv4", 00:22:17.597 "traddr": "10.0.0.2", 00:22:17.597 "trsvcid": "4420" 00:22:17.597 }, 00:22:17.597 "secure_channel": true 00:22:17.597 } 00:22:17.597 } 00:22:17.597 ] 00:22:17.597 } 00:22:17.597 ] 00:22:17.597 }' 00:22:17.597 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:18.162 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:18.162 "subsystems": [ 00:22:18.162 { 00:22:18.162 "subsystem": "keyring", 00:22:18.162 "config": [] 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "subsystem": "iobuf", 00:22:18.162 "config": [ 00:22:18.162 { 00:22:18.162 "method": "iobuf_set_options", 00:22:18.162 "params": { 00:22:18.162 "small_pool_count": 8192, 00:22:18.162 "large_pool_count": 1024, 00:22:18.162 "small_bufsize": 8192, 00:22:18.162 "large_bufsize": 135168 00:22:18.162 } 00:22:18.162 } 00:22:18.162 ] 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "subsystem": "sock", 00:22:18.162 "config": [ 00:22:18.162 { 00:22:18.162 "method": "sock_set_default_impl", 00:22:18.162 "params": { 00:22:18.162 "impl_name": "posix" 00:22:18.162 } 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "method": "sock_impl_set_options", 00:22:18.162 "params": { 00:22:18.162 "impl_name": "ssl", 00:22:18.162 "recv_buf_size": 4096, 00:22:18.162 "send_buf_size": 4096, 00:22:18.162 "enable_recv_pipe": true, 00:22:18.162 "enable_quickack": false, 00:22:18.162 "enable_placement_id": 0, 00:22:18.162 "enable_zerocopy_send_server": true, 00:22:18.162 "enable_zerocopy_send_client": false, 00:22:18.162 "zerocopy_threshold": 0, 00:22:18.162 "tls_version": 0, 00:22:18.162 "enable_ktls": false 00:22:18.162 } 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "method": "sock_impl_set_options", 00:22:18.162 "params": { 00:22:18.162 "impl_name": "posix", 00:22:18.162 "recv_buf_size": 2097152, 00:22:18.162 "send_buf_size": 2097152, 00:22:18.162 "enable_recv_pipe": true, 00:22:18.162 "enable_quickack": false, 00:22:18.162 "enable_placement_id": 0, 00:22:18.162 "enable_zerocopy_send_server": true, 00:22:18.162 "enable_zerocopy_send_client": false, 00:22:18.162 "zerocopy_threshold": 0, 00:22:18.162 "tls_version": 0, 00:22:18.162 "enable_ktls": false 00:22:18.162 } 00:22:18.162 } 00:22:18.162 ] 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "subsystem": "vmd", 00:22:18.162 "config": [] 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "subsystem": "accel", 00:22:18.162 "config": [ 00:22:18.162 { 00:22:18.162 "method": "accel_set_options", 00:22:18.162 "params": { 00:22:18.162 "small_cache_size": 128, 00:22:18.162 "large_cache_size": 16, 00:22:18.162 "task_count": 2048, 00:22:18.162 "sequence_count": 2048, 00:22:18.162 "buf_count": 2048 00:22:18.162 } 00:22:18.162 } 00:22:18.162 ] 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "subsystem": "bdev", 00:22:18.162 
"config": [ 00:22:18.162 { 00:22:18.162 "method": "bdev_set_options", 00:22:18.162 "params": { 00:22:18.162 "bdev_io_pool_size": 65535, 00:22:18.162 "bdev_io_cache_size": 256, 00:22:18.162 "bdev_auto_examine": true, 00:22:18.162 "iobuf_small_cache_size": 128, 00:22:18.162 "iobuf_large_cache_size": 16 00:22:18.162 } 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "method": "bdev_raid_set_options", 00:22:18.162 "params": { 00:22:18.162 "process_window_size_kb": 1024, 00:22:18.162 "process_max_bandwidth_mb_sec": 0 00:22:18.162 } 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "method": "bdev_iscsi_set_options", 00:22:18.162 "params": { 00:22:18.162 "timeout_sec": 30 00:22:18.162 } 00:22:18.162 }, 00:22:18.162 { 00:22:18.162 "method": "bdev_nvme_set_options", 00:22:18.162 "params": { 00:22:18.162 "action_on_timeout": "none", 00:22:18.162 "timeout_us": 0, 00:22:18.162 "timeout_admin_us": 0, 00:22:18.162 "keep_alive_timeout_ms": 10000, 00:22:18.162 "arbitration_burst": 0, 00:22:18.162 "low_priority_weight": 0, 00:22:18.162 "medium_priority_weight": 0, 00:22:18.162 "high_priority_weight": 0, 00:22:18.162 "nvme_adminq_poll_period_us": 10000, 00:22:18.162 "nvme_ioq_poll_period_us": 0, 00:22:18.162 "io_queue_requests": 512, 00:22:18.162 "delay_cmd_submit": true, 00:22:18.162 "transport_retry_count": 4, 00:22:18.162 "bdev_retry_count": 3, 00:22:18.162 "transport_ack_timeout": 0, 00:22:18.162 "ctrlr_loss_timeout_sec": 0, 00:22:18.162 "reconnect_delay_sec": 0, 00:22:18.162 "fast_io_fail_timeout_sec": 0, 00:22:18.162 "disable_auto_failback": false, 00:22:18.162 "generate_uuids": false, 00:22:18.162 "transport_tos": 0, 00:22:18.162 "nvme_error_stat": false, 00:22:18.162 "rdma_srq_size": 0, 00:22:18.162 "io_path_stat": false, 00:22:18.162 "allow_accel_sequence": false, 00:22:18.162 "rdma_max_cq_size": 0, 00:22:18.162 "rdma_cm_event_timeout_ms": 0, 00:22:18.162 "dhchap_digests": [ 00:22:18.162 "sha256", 00:22:18.162 "sha384", 00:22:18.162 "sha512" 00:22:18.162 ], 00:22:18.163 "dhchap_dhgroups": [ 00:22:18.163 "null", 00:22:18.163 "ffdhe2048", 00:22:18.163 "ffdhe3072", 00:22:18.163 "ffdhe4096", 00:22:18.163 "ffdhe6144", 00:22:18.163 "ffdhe8192" 00:22:18.163 ] 00:22:18.163 } 00:22:18.163 }, 00:22:18.163 { 00:22:18.163 "method": "bdev_nvme_attach_controller", 00:22:18.163 "params": { 00:22:18.163 "name": "TLSTEST", 00:22:18.163 "trtype": "TCP", 00:22:18.163 "adrfam": "IPv4", 00:22:18.163 "traddr": "10.0.0.2", 00:22:18.163 "trsvcid": "4420", 00:22:18.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.163 "prchk_reftag": false, 00:22:18.163 "prchk_guard": false, 00:22:18.163 "ctrlr_loss_timeout_sec": 0, 00:22:18.163 "reconnect_delay_sec": 0, 00:22:18.163 "fast_io_fail_timeout_sec": 0, 00:22:18.163 "psk": "/tmp/tmp.dpGKG7c5nw", 00:22:18.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.163 "hdgst": false, 00:22:18.163 "ddgst": false 00:22:18.163 } 00:22:18.163 }, 00:22:18.163 { 00:22:18.163 "method": "bdev_nvme_set_hotplug", 00:22:18.163 "params": { 00:22:18.163 "period_us": 100000, 00:22:18.163 "enable": false 00:22:18.163 } 00:22:18.163 }, 00:22:18.163 { 00:22:18.163 "method": "bdev_wait_for_examine" 00:22:18.163 } 00:22:18.163 ] 00:22:18.163 }, 00:22:18.163 { 00:22:18.163 "subsystem": "nbd", 00:22:18.163 "config": [] 00:22:18.163 } 00:22:18.163 ] 00:22:18.163 }' 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1504055 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1504055 ']' 00:22:18.163 18:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1504055 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504055 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1504055' 00:22:18.163 killing process with pid 1504055 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1504055 00:22:18.163 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.163 00:22:18.163 Latency(us) 00:22:18.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.163 =================================================================================================================== 00:22:18.163 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:18.163 [2024-07-26 18:22:44.061029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1504055 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1503776 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1503776 ']' 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1503776 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.163 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503776 00:22:18.424 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:18.424 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:18.424 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503776' 00:22:18.424 killing process with pid 1503776 00:22:18.424 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1503776 00:22:18.424 [2024-07-26 18:22:44.314240] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.424 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1503776 00:22:18.706 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:18.706 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.706 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:18.706 "subsystems": [ 00:22:18.706 { 00:22:18.706 "subsystem": 
"keyring", 00:22:18.706 "config": [] 00:22:18.706 }, 00:22:18.706 { 00:22:18.706 "subsystem": "iobuf", 00:22:18.706 "config": [ 00:22:18.706 { 00:22:18.706 "method": "iobuf_set_options", 00:22:18.706 "params": { 00:22:18.706 "small_pool_count": 8192, 00:22:18.706 "large_pool_count": 1024, 00:22:18.706 "small_bufsize": 8192, 00:22:18.706 "large_bufsize": 135168 00:22:18.706 } 00:22:18.706 } 00:22:18.706 ] 00:22:18.706 }, 00:22:18.706 { 00:22:18.706 "subsystem": "sock", 00:22:18.706 "config": [ 00:22:18.706 { 00:22:18.706 "method": "sock_set_default_impl", 00:22:18.706 "params": { 00:22:18.706 "impl_name": "posix" 00:22:18.706 } 00:22:18.706 }, 00:22:18.706 { 00:22:18.706 "method": "sock_impl_set_options", 00:22:18.706 "params": { 00:22:18.706 "impl_name": "ssl", 00:22:18.706 "recv_buf_size": 4096, 00:22:18.706 "send_buf_size": 4096, 00:22:18.706 "enable_recv_pipe": true, 00:22:18.706 "enable_quickack": false, 00:22:18.706 "enable_placement_id": 0, 00:22:18.706 "enable_zerocopy_send_server": true, 00:22:18.706 "enable_zerocopy_send_client": false, 00:22:18.706 "zerocopy_threshold": 0, 00:22:18.706 "tls_version": 0, 00:22:18.706 "enable_ktls": false 00:22:18.706 } 00:22:18.706 }, 00:22:18.706 { 00:22:18.706 "method": "sock_impl_set_options", 00:22:18.706 "params": { 00:22:18.706 "impl_name": "posix", 00:22:18.706 "recv_buf_size": 2097152, 00:22:18.706 "send_buf_size": 2097152, 00:22:18.706 "enable_recv_pipe": true, 00:22:18.706 "enable_quickack": false, 00:22:18.706 "enable_placement_id": 0, 00:22:18.706 "enable_zerocopy_send_server": true, 00:22:18.706 "enable_zerocopy_send_client": false, 00:22:18.706 "zerocopy_threshold": 0, 00:22:18.706 "tls_version": 0, 00:22:18.706 "enable_ktls": false 00:22:18.706 } 00:22:18.706 } 00:22:18.706 ] 00:22:18.706 }, 00:22:18.706 { 00:22:18.706 "subsystem": "vmd", 00:22:18.706 "config": [] 00:22:18.706 }, 00:22:18.706 { 00:22:18.706 "subsystem": "accel", 00:22:18.706 "config": [ 00:22:18.706 { 00:22:18.706 "method": "accel_set_options", 00:22:18.706 "params": { 00:22:18.706 "small_cache_size": 128, 00:22:18.706 "large_cache_size": 16, 00:22:18.706 "task_count": 2048, 00:22:18.706 "sequence_count": 2048, 00:22:18.707 "buf_count": 2048 00:22:18.707 } 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "subsystem": "bdev", 00:22:18.707 "config": [ 00:22:18.707 { 00:22:18.707 "method": "bdev_set_options", 00:22:18.707 "params": { 00:22:18.707 "bdev_io_pool_size": 65535, 00:22:18.707 "bdev_io_cache_size": 256, 00:22:18.707 "bdev_auto_examine": true, 00:22:18.707 "iobuf_small_cache_size": 128, 00:22:18.707 "iobuf_large_cache_size": 16 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "bdev_raid_set_options", 00:22:18.707 "params": { 00:22:18.707 "process_window_size_kb": 1024, 00:22:18.707 "process_max_bandwidth_mb_sec": 0 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "bdev_iscsi_set_options", 00:22:18.707 "params": { 00:22:18.707 "timeout_sec": 30 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "bdev_nvme_set_options", 00:22:18.707 "params": { 00:22:18.707 "action_on_timeout": "none", 00:22:18.707 "timeout_us": 0, 00:22:18.707 "timeout_admin_us": 0, 00:22:18.707 "keep_alive_timeout_ms": 10000, 00:22:18.707 "arbitration_burst": 0, 00:22:18.707 "low_priority_weight": 0, 00:22:18.707 "medium_priority_weight": 0, 00:22:18.707 "high_priority_weight": 0, 00:22:18.707 "nvme_adminq_poll_period_us": 10000, 00:22:18.707 "nvme_ioq_poll_period_us": 0, 00:22:18.707 "io_queue_requests": 0, 
00:22:18.707 "delay_cmd_submit": true, 00:22:18.707 "transport_retry_count": 4, 00:22:18.707 "bdev_retry_count": 3, 00:22:18.707 "transport_ack_timeout": 0, 00:22:18.707 "ctrlr_loss_timeout_sec": 0, 00:22:18.707 "reconnect_delay_sec": 0, 00:22:18.707 "fast_io_fail_timeout_sec": 0, 00:22:18.707 "disable_auto_failback": false, 00:22:18.707 "generate_uuids": false, 00:22:18.707 "transport_tos": 0, 00:22:18.707 "nvme_error_stat": false, 00:22:18.707 "rdma_srq_size": 0, 00:22:18.707 "io_path_stat": false, 00:22:18.707 "allow_accel_sequence": false, 00:22:18.707 "rdma_max_cq_size": 0, 00:22:18.707 "rdma_cm_event_timeout_ms": 0, 00:22:18.707 "dhchap_digests": [ 00:22:18.707 "sha256", 00:22:18.707 "sha384", 00:22:18.707 "sha512" 00:22:18.707 ], 00:22:18.707 "dhchap_dhgroups": [ 00:22:18.707 "null", 00:22:18.707 "ffdhe2048", 00:22:18.707 "ffdhe3072", 00:22:18.707 "ffdhe4096", 00:22:18.707 "ffdhe6144", 00:22:18.707 "ffdhe8192" 00:22:18.707 ] 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "bdev_nvme_set_hotplug", 00:22:18.707 "params": { 00:22:18.707 "period_us": 100000, 00:22:18.707 "enable": false 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "bdev_malloc_create", 00:22:18.707 "params": { 00:22:18.707 "name": "malloc0", 00:22:18.707 "num_blocks": 8192, 00:22:18.707 "block_size": 4096, 00:22:18.707 "physical_block_size": 4096, 00:22:18.707 "uuid": "a6afa68a-9170-4345-922b-691263510a3c", 00:22:18.707 "optimal_io_boundary": 0, 00:22:18.707 "md_size": 0, 00:22:18.707 "dif_type": 0, 00:22:18.707 "dif_is_head_of_md": false, 00:22:18.707 "dif_pi_format": 0 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "bdev_wait_for_examine" 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "subsystem": "nbd", 00:22:18.707 "config": [] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "subsystem": "scheduler", 00:22:18.707 "config": [ 00:22:18.707 { 00:22:18.707 "method": "framework_set_scheduler", 00:22:18.707 "params": { 00:22:18.707 "name": "static" 00:22:18.707 } 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "subsystem": "nvmf", 00:22:18.707 "config": [ 00:22:18.707 { 00:22:18.707 "method": "nvmf_set_config", 00:22:18.707 "params": { 00:22:18.707 "discovery_filter": "match_any", 00:22:18.707 "admin_cmd_passthru": { 00:22:18.707 "identify_ctrlr": false 00:22:18.707 } 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_set_max_subsystems", 00:22:18.707 "params": { 00:22:18.707 "max_subsystems": 1024 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_set_crdt", 00:22:18.707 "params": { 00:22:18.707 "crdt1": 0, 00:22:18.707 "crdt2": 0, 00:22:18.707 "crdt3": 0 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_create_transport", 00:22:18.707 "params": { 00:22:18.707 "trtype": "TCP", 00:22:18.707 "max_queue_depth": 128, 00:22:18.707 "max_io_qpairs_per_ctrlr": 127, 00:22:18.707 "in_capsule_data_size": 4096, 00:22:18.707 "max_io_size": 131072, 00:22:18.707 "io_unit_size": 131072, 00:22:18.707 "max_aq_depth": 128, 00:22:18.707 "num_shared_buffers": 511, 00:22:18.707 "buf_cache_size": 4294967295, 00:22:18.707 "dif_insert_or_strip": false, 00:22:18.707 "zcopy": false, 00:22:18.707 "c2h_success": false, 00:22:18.707 "sock_priority": 0, 00:22:18.707 "abort_timeout_sec": 1, 00:22:18.707 "ack_timeout": 0, 00:22:18.707 "data_wr_pool_size": 0 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_create_subsystem", 00:22:18.707 
"params": { 00:22:18.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.707 "allow_any_host": false, 00:22:18.707 "serial_number": "SPDK00000000000001", 00:22:18.707 "model_number": "SPDK bdev Controller", 00:22:18.707 "max_namespaces": 10, 00:22:18.707 "min_cntlid": 1, 00:22:18.707 "max_cntlid": 65519, 00:22:18.707 "ana_reporting": false 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_subsystem_add_host", 00:22:18.707 "params": { 00:22:18.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.707 "host": "nqn.2016-06.io.spdk:host1", 00:22:18.707 "psk": "/tmp/tmp.dpGKG7c5nw" 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_subsystem_add_ns", 00:22:18.707 "params": { 00:22:18.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.707 "namespace": { 00:22:18.707 "nsid": 1, 00:22:18.707 "bdev_name": "malloc0", 00:22:18.707 "nguid": "A6AFA68A91704345922B691263510A3C", 00:22:18.707 "uuid": "a6afa68a-9170-4345-922b-691263510a3c", 00:22:18.707 "no_auto_visible": false 00:22:18.707 } 00:22:18.707 } 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "method": "nvmf_subsystem_add_listener", 00:22:18.707 "params": { 00:22:18.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.707 "listen_address": { 00:22:18.707 "trtype": "TCP", 00:22:18.707 "adrfam": "IPv4", 00:22:18.707 "traddr": "10.0.0.2", 00:22:18.707 "trsvcid": "4420" 00:22:18.707 }, 00:22:18.707 "secure_channel": true 00:22:18.707 } 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }' 00:22:18.707 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.707 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.707 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1504216 00:22:18.707 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:18.707 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1504216 00:22:18.707 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1504216 ']' 00:22:18.708 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.708 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.708 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.708 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.708 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.708 [2024-07-26 18:22:44.629487] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:22:18.708 [2024-07-26 18:22:44.629569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.708 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.708 [2024-07-26 18:22:44.668647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:18.708 [2024-07-26 18:22:44.694821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.708 [2024-07-26 18:22:44.782262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.708 [2024-07-26 18:22:44.782323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.708 [2024-07-26 18:22:44.782337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.708 [2024-07-26 18:22:44.782349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.708 [2024-07-26 18:22:44.782358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.708 [2024-07-26 18:22:44.782449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.970 [2024-07-26 18:22:45.017395] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.970 [2024-07-26 18:22:45.045791] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:18.970 [2024-07-26 18:22:45.061859] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.970 [2024-07-26 18:22:45.062121] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1504369 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1504369 /var/tmp/bdevperf.sock 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1504369 ']' 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.535 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:19.536 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.536 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:19.536 "subsystems": [ 00:22:19.536 { 00:22:19.536 "subsystem": "keyring", 00:22:19.536 "config": [] 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "subsystem": "iobuf", 00:22:19.536 "config": [ 00:22:19.536 { 00:22:19.536 "method": "iobuf_set_options", 00:22:19.536 "params": { 00:22:19.536 "small_pool_count": 8192, 00:22:19.536 "large_pool_count": 1024, 00:22:19.536 "small_bufsize": 8192, 00:22:19.536 "large_bufsize": 135168 00:22:19.536 } 00:22:19.536 } 00:22:19.536 ] 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "subsystem": "sock", 00:22:19.536 "config": [ 00:22:19.536 { 00:22:19.536 "method": "sock_set_default_impl", 00:22:19.536 "params": { 00:22:19.536 "impl_name": "posix" 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "sock_impl_set_options", 00:22:19.536 "params": { 00:22:19.536 "impl_name": "ssl", 00:22:19.536 "recv_buf_size": 4096, 00:22:19.536 "send_buf_size": 4096, 00:22:19.536 "enable_recv_pipe": true, 00:22:19.536 "enable_quickack": false, 00:22:19.536 "enable_placement_id": 0, 00:22:19.536 "enable_zerocopy_send_server": true, 00:22:19.536 "enable_zerocopy_send_client": false, 00:22:19.536 "zerocopy_threshold": 0, 00:22:19.536 "tls_version": 0, 00:22:19.536 "enable_ktls": false 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "sock_impl_set_options", 00:22:19.536 "params": { 00:22:19.536 "impl_name": "posix", 00:22:19.536 "recv_buf_size": 2097152, 00:22:19.536 "send_buf_size": 2097152, 00:22:19.536 "enable_recv_pipe": true, 00:22:19.536 "enable_quickack": false, 00:22:19.536 "enable_placement_id": 0, 00:22:19.536 "enable_zerocopy_send_server": true, 00:22:19.536 "enable_zerocopy_send_client": false, 00:22:19.536 "zerocopy_threshold": 0, 00:22:19.536 "tls_version": 0, 00:22:19.536 "enable_ktls": false 00:22:19.536 } 00:22:19.536 } 00:22:19.536 ] 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "subsystem": "vmd", 00:22:19.536 "config": [] 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "subsystem": "accel", 00:22:19.536 "config": [ 00:22:19.536 { 00:22:19.536 "method": "accel_set_options", 00:22:19.536 "params": { 00:22:19.536 "small_cache_size": 128, 00:22:19.536 "large_cache_size": 16, 00:22:19.536 "task_count": 2048, 00:22:19.536 "sequence_count": 2048, 00:22:19.536 "buf_count": 2048 00:22:19.536 } 00:22:19.536 } 00:22:19.536 ] 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "subsystem": "bdev", 00:22:19.536 "config": [ 00:22:19.536 { 00:22:19.536 "method": "bdev_set_options", 00:22:19.536 "params": { 00:22:19.536 "bdev_io_pool_size": 65535, 00:22:19.536 "bdev_io_cache_size": 256, 00:22:19.536 "bdev_auto_examine": true, 00:22:19.536 "iobuf_small_cache_size": 128, 00:22:19.536 "iobuf_large_cache_size": 16 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "bdev_raid_set_options", 00:22:19.536 "params": { 00:22:19.536 "process_window_size_kb": 1024, 00:22:19.536 "process_max_bandwidth_mb_sec": 0 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "bdev_iscsi_set_options", 00:22:19.536 "params": { 00:22:19.536 "timeout_sec": 30 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "bdev_nvme_set_options", 00:22:19.536 "params": { 00:22:19.536 "action_on_timeout": "none", 
00:22:19.536 "timeout_us": 0, 00:22:19.536 "timeout_admin_us": 0, 00:22:19.536 "keep_alive_timeout_ms": 10000, 00:22:19.536 "arbitration_burst": 0, 00:22:19.536 "low_priority_weight": 0, 00:22:19.536 "medium_priority_weight": 0, 00:22:19.536 "high_priority_weight": 0, 00:22:19.536 "nvme_adminq_poll_period_us": 10000, 00:22:19.536 "nvme_ioq_poll_period_us": 0, 00:22:19.536 "io_queue_requests": 512, 00:22:19.536 "delay_cmd_submit": true, 00:22:19.536 "transport_retry_count": 4, 00:22:19.536 "bdev_retry_count": 3, 00:22:19.536 "transport_ack_timeout": 0, 00:22:19.536 "ctrlr_loss_timeout_sec": 0, 00:22:19.536 "reconnect_delay_sec": 0, 00:22:19.536 "fast_io_fail_timeout_sec": 0, 00:22:19.536 "disable_auto_failback": false, 00:22:19.536 "generate_uuids": false, 00:22:19.536 "transport_tos": 0, 00:22:19.536 "nvme_error_stat": false, 00:22:19.536 "rdma_srq_size": 0, 00:22:19.536 "io_path_stat": false, 00:22:19.536 "allow_accel_sequence": false, 00:22:19.536 "rdma_max_cq_size": 0, 00:22:19.536 "rdma_cm_event_timeout_ms": 0, 00:22:19.536 "dhchap_digests": [ 00:22:19.536 "sha256", 00:22:19.536 "sha384", 00:22:19.536 "sha512" 00:22:19.536 ], 00:22:19.536 "dhchap_dhgroups": [ 00:22:19.536 "null", 00:22:19.536 "ffdhe2048", 00:22:19.536 "ffdhe3072", 00:22:19.536 "ffdhe4096", 00:22:19.536 "ffdhe6144", 00:22:19.536 "ffdhe8192" 00:22:19.536 ] 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "bdev_nvme_attach_controller", 00:22:19.536 "params": { 00:22:19.536 "name": "TLSTEST", 00:22:19.536 "trtype": "TCP", 00:22:19.536 "adrfam": "IPv4", 00:22:19.536 "traddr": "10.0.0.2", 00:22:19.536 "trsvcid": "4420", 00:22:19.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.536 "prchk_reftag": false, 00:22:19.536 "prchk_guard": false, 00:22:19.536 "ctrlr_loss_timeout_sec": 0, 00:22:19.536 "reconnect_delay_sec": 0, 00:22:19.536 "fast_io_fail_timeout_sec": 0, 00:22:19.536 "psk": "/tmp/tmp.dpGKG7c5nw", 00:22:19.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.536 "hdgst": false, 00:22:19.536 "ddgst": false 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "bdev_nvme_set_hotplug", 00:22:19.536 "params": { 00:22:19.536 "period_us": 100000, 00:22:19.536 "enable": false 00:22:19.536 } 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "method": "bdev_wait_for_examine" 00:22:19.536 } 00:22:19.536 ] 00:22:19.536 }, 00:22:19.536 { 00:22:19.536 "subsystem": "nbd", 00:22:19.536 "config": [] 00:22:19.536 } 00:22:19.536 ] 00:22:19.536 }' 00:22:19.536 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.536 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.536 [2024-07-26 18:22:45.641822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:19.536 [2024-07-26 18:22:45.641906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504369 ] 00:22:19.536 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.795 [2024-07-26 18:22:45.680001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:19.795 [2024-07-26 18:22:45.705938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.796 [2024-07-26 18:22:45.788458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.056 [2024-07-26 18:22:45.955764] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.056 [2024-07-26 18:22:45.955875] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.622 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.622 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:20.622 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:20.622 Running I/O for 10 seconds... 00:22:32.828 00:22:32.828 Latency(us) 00:22:32.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.828 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.828 Verification LBA range: start 0x0 length 0x2000 00:22:32.828 TLSTESTn1 : 10.05 2400.26 9.38 0.00 0.00 53176.39 8641.04 98643.82 00:22:32.828 =================================================================================================================== 00:22:32.828 Total : 2400.26 9.38 0.00 0.00 53176.39 8641.04 98643.82 00:22:32.828 0 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1504369 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1504369 ']' 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1504369 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504369 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1504369' 00:22:32.828 killing process with pid 1504369 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1504369 00:22:32.828 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.828 00:22:32.828 Latency(us) 00:22:32.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.828 =================================================================================================================== 00:22:32.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.828 [2024-07-26 18:22:56.860582] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.828 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@974 -- # wait 1504369 00:22:32.828 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1504216 00:22:32.828 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1504216 ']' 00:22:32.828 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1504216 00:22:32.828 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504216 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1504216' 00:22:32.829 killing process with pid 1504216 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1504216 00:22:32.829 [2024-07-26 18:22:57.113553] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1504216 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1505706 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1505706 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1505706 ']' 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.829 [2024-07-26 18:22:57.416890] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
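The killprocess traces above all follow one pattern: guard against an empty pid, probe the process with kill -0, log its reactor name from ps, signal it, and reap it with wait so its sockets are free for the next stage. A condensed sketch of that flow as the xtrace shows it (a reconstruction, not the helper's verbatim body):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z "$pid" ']' guard in the trace
        kill -0 "$pid" || return 0             # nothing to do if it is already gone
        ps --no-headers -o comm= "$pid"        # logged above as process_name=reactor_N
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # the trailing '# wait $pid' step seen above
    }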
00:22:32.829 [2024-07-26 18:22:57.416969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.829 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.829 [2024-07-26 18:22:57.455669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:32.829 [2024-07-26 18:22:57.482375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.829 [2024-07-26 18:22:57.570466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.829 [2024-07-26 18:22:57.570523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.829 [2024-07-26 18:22:57.570537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.829 [2024-07-26 18:22:57.570549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.829 [2024-07-26 18:22:57.570559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.829 [2024-07-26 18:22:57.570600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.dpGKG7c5nw 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dpGKG7c5nw 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.829 [2024-07-26 18:22:57.946978] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.829 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.829 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.829 [2024-07-26 18:22:58.492471] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.829 [2024-07-26 18:22:58.492722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.829 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.829 malloc0 
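The setup_nvmf_tgt helper drives the RPC sequence traced around this point; the -k flag on the listener is what requires a secure (TLS) channel, and the PSK file (this run's temporary /tmp/tmp.dpGKG7c5nw) is granted to the host at the end. Condensed from the trace, with $rpc as shorthand for the rpc.py path:

    rpc=$rootdir/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw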
00:22:32.829 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:33.087 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dpGKG7c5nw 00:22:33.345 [2024-07-26 18:22:59.310534] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1505982 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1505982 /var/tmp/bdevperf.sock 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1505982 ']' 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.345 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.345 [2024-07-26 18:22:59.378033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:33.345 [2024-07-26 18:22:59.378147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505982 ] 00:22:33.345 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.345 [2024-07-26 18:22:59.419000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:33.345 [2024-07-26 18:22:59.450910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.604 [2024-07-26 18:22:59.545466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.604 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.604 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:33.604 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dpGKG7c5nw 00:22:33.862 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:34.120 [2024-07-26 18:23:00.143260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.120 nvme0n1 00:22:34.120 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:34.377 Running I/O for 1 seconds... 00:22:35.311 00:22:35.311 Latency(us) 00:22:35.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:35.311 Verification LBA range: start 0x0 length 0x2000 00:22:35.311 nvme0n1 : 1.06 2235.75 8.73 0.00 0.00 55952.34 7427.41 83497.72 00:22:35.311 =================================================================================================================== 00:22:35.311 Total : 2235.75 8.73 0.00 0.00 55952.34 7427.41 83497.72 00:22:35.311 0 00:22:35.311 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1505982 00:22:35.311 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1505982 ']' 00:22:35.311 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1505982 00:22:35.311 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:35.311 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.311 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505982 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505982' 00:22:35.570 killing process with pid 1505982 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1505982 00:22:35.570 Received shutdown signal, test time was about 1.000000 seconds 00:22:35.570 00:22:35.570 Latency(us) 00:22:35.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.570 =================================================================================================================== 00:22:35.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.570 
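This run attaches through the keyring API instead of passing the PSK path directly in the attach call (the older style that produced the spdk_nvme_ctrlr_opts.psk deprecation notices earlier in this log): the key file is registered under a name, and the controller references that name. Condensed from the trace:

    rpc=$rootdir/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dpGKG7c5nw
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1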
18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1505982 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1505706 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1505706 ']' 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1505706 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.570 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505706 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505706' 00:22:35.828 killing process with pid 1505706 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1505706 00:22:35.828 [2024-07-26 18:23:01.717455] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1505706 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.828 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1506277 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1506277 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1506277 ']' 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.829 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.087 [2024-07-26 18:23:02.011762] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
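waitforlisten, invoked after each nvmfappstart above, polls the new application's RPC socket until it answers or the process dies. A minimal sketch of the idea, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as a cheap liveness probe (a reconstruction, not the helper's verbatim body):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1     # app exited during startup
            if $rootdir/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                # socket is up and answering
            fi
            sleep 0.1
        done
        return 1                                        # timed out waiting for the listener
    }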
00:22:36.087 [2024-07-26 18:23:02.011847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.087 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.087 [2024-07-26 18:23:02.055903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:36.087 [2024-07-26 18:23:02.083957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.087 [2024-07-26 18:23:02.173666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.087 [2024-07-26 18:23:02.173725] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.087 [2024-07-26 18:23:02.173739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.087 [2024-07-26 18:23:02.173750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.087 [2024-07-26 18:23:02.173760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.087 [2024-07-26 18:23:02.173791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.346 [2024-07-26 18:23:02.318526] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.346 malloc0 00:22:36.346 [2024-07-26 18:23:02.350041] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.346 [2024-07-26 18:23:02.360290] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1506406 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1506406 /var/tmp/bdevperf.sock 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1506406 ']' 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.346 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.346 [2024-07-26 18:23:02.426803] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:36.346 [2024-07-26 18:23:02.426865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506406 ] 00:22:36.346 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.346 [2024-07-26 18:23:02.458803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:36.346 [2024-07-26 18:23:02.489312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.605 [2024-07-26 18:23:02.579787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.605 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.605 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:36.605 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dpGKG7c5nw 00:22:36.863 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:37.120 [2024-07-26 18:23:03.256356] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.377 nvme0n1 00:22:37.377 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:37.377 Running I/O for 1 seconds... 
00:22:38.745 00:22:38.745 Latency(us) 00:22:38.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.745 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:38.745 Verification LBA range: start 0x0 length 0x2000 00:22:38.745 nvme0n1 : 1.07 1602.21 6.26 0.00 0.00 77804.63 11553.75 120392.06 00:22:38.745 =================================================================================================================== 00:22:38.745 Total : 1602.21 6.26 0.00 0.00 77804.63 11553.75 120392.06 00:22:38.745 0 00:22:38.745 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:38.745 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.745 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.745 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.745 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:38.745 "subsystems": [ 00:22:38.745 { 00:22:38.745 "subsystem": "keyring", 00:22:38.745 "config": [ 00:22:38.745 { 00:22:38.745 "method": "keyring_file_add_key", 00:22:38.745 "params": { 00:22:38.745 "name": "key0", 00:22:38.745 "path": "/tmp/tmp.dpGKG7c5nw" 00:22:38.745 } 00:22:38.745 } 00:22:38.745 ] 00:22:38.745 }, 00:22:38.745 { 00:22:38.745 "subsystem": "iobuf", 00:22:38.745 "config": [ 00:22:38.745 { 00:22:38.745 "method": "iobuf_set_options", 00:22:38.745 "params": { 00:22:38.745 "small_pool_count": 8192, 00:22:38.745 "large_pool_count": 1024, 00:22:38.745 "small_bufsize": 8192, 00:22:38.745 "large_bufsize": 135168 00:22:38.745 } 00:22:38.745 } 00:22:38.745 ] 00:22:38.745 }, 00:22:38.745 { 00:22:38.745 "subsystem": "sock", 00:22:38.745 "config": [ 00:22:38.745 { 00:22:38.745 "method": "sock_set_default_impl", 00:22:38.745 "params": { 00:22:38.745 "impl_name": "posix" 00:22:38.745 } 00:22:38.745 }, 00:22:38.745 { 00:22:38.745 "method": "sock_impl_set_options", 00:22:38.745 "params": { 00:22:38.745 "impl_name": "ssl", 00:22:38.745 "recv_buf_size": 4096, 00:22:38.745 "send_buf_size": 4096, 00:22:38.745 "enable_recv_pipe": true, 00:22:38.745 "enable_quickack": false, 00:22:38.745 "enable_placement_id": 0, 00:22:38.745 "enable_zerocopy_send_server": true, 00:22:38.746 "enable_zerocopy_send_client": false, 00:22:38.746 "zerocopy_threshold": 0, 00:22:38.746 "tls_version": 0, 00:22:38.746 "enable_ktls": false 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "sock_impl_set_options", 00:22:38.746 "params": { 00:22:38.746 "impl_name": "posix", 00:22:38.746 "recv_buf_size": 2097152, 00:22:38.746 "send_buf_size": 2097152, 00:22:38.746 "enable_recv_pipe": true, 00:22:38.746 "enable_quickack": false, 00:22:38.746 "enable_placement_id": 0, 00:22:38.746 "enable_zerocopy_send_server": true, 00:22:38.746 "enable_zerocopy_send_client": false, 00:22:38.746 "zerocopy_threshold": 0, 00:22:38.746 "tls_version": 0, 00:22:38.746 "enable_ktls": false 00:22:38.746 } 00:22:38.746 } 00:22:38.746 ] 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "subsystem": "vmd", 00:22:38.746 "config": [] 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "subsystem": "accel", 00:22:38.746 "config": [ 00:22:38.746 { 00:22:38.746 "method": "accel_set_options", 00:22:38.746 "params": { 00:22:38.746 "small_cache_size": 128, 00:22:38.746 "large_cache_size": 16, 00:22:38.746 "task_count": 2048, 00:22:38.746 "sequence_count": 2048, 00:22:38.746 
"buf_count": 2048 00:22:38.746 } 00:22:38.746 } 00:22:38.746 ] 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "subsystem": "bdev", 00:22:38.746 "config": [ 00:22:38.746 { 00:22:38.746 "method": "bdev_set_options", 00:22:38.746 "params": { 00:22:38.746 "bdev_io_pool_size": 65535, 00:22:38.746 "bdev_io_cache_size": 256, 00:22:38.746 "bdev_auto_examine": true, 00:22:38.746 "iobuf_small_cache_size": 128, 00:22:38.746 "iobuf_large_cache_size": 16 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "bdev_raid_set_options", 00:22:38.746 "params": { 00:22:38.746 "process_window_size_kb": 1024, 00:22:38.746 "process_max_bandwidth_mb_sec": 0 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "bdev_iscsi_set_options", 00:22:38.746 "params": { 00:22:38.746 "timeout_sec": 30 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "bdev_nvme_set_options", 00:22:38.746 "params": { 00:22:38.746 "action_on_timeout": "none", 00:22:38.746 "timeout_us": 0, 00:22:38.746 "timeout_admin_us": 0, 00:22:38.746 "keep_alive_timeout_ms": 10000, 00:22:38.746 "arbitration_burst": 0, 00:22:38.746 "low_priority_weight": 0, 00:22:38.746 "medium_priority_weight": 0, 00:22:38.746 "high_priority_weight": 0, 00:22:38.746 "nvme_adminq_poll_period_us": 10000, 00:22:38.746 "nvme_ioq_poll_period_us": 0, 00:22:38.746 "io_queue_requests": 0, 00:22:38.746 "delay_cmd_submit": true, 00:22:38.746 "transport_retry_count": 4, 00:22:38.746 "bdev_retry_count": 3, 00:22:38.746 "transport_ack_timeout": 0, 00:22:38.746 "ctrlr_loss_timeout_sec": 0, 00:22:38.746 "reconnect_delay_sec": 0, 00:22:38.746 "fast_io_fail_timeout_sec": 0, 00:22:38.746 "disable_auto_failback": false, 00:22:38.746 "generate_uuids": false, 00:22:38.746 "transport_tos": 0, 00:22:38.746 "nvme_error_stat": false, 00:22:38.746 "rdma_srq_size": 0, 00:22:38.746 "io_path_stat": false, 00:22:38.746 "allow_accel_sequence": false, 00:22:38.746 "rdma_max_cq_size": 0, 00:22:38.746 "rdma_cm_event_timeout_ms": 0, 00:22:38.746 "dhchap_digests": [ 00:22:38.746 "sha256", 00:22:38.746 "sha384", 00:22:38.746 "sha512" 00:22:38.746 ], 00:22:38.746 "dhchap_dhgroups": [ 00:22:38.746 "null", 00:22:38.746 "ffdhe2048", 00:22:38.746 "ffdhe3072", 00:22:38.746 "ffdhe4096", 00:22:38.746 "ffdhe6144", 00:22:38.746 "ffdhe8192" 00:22:38.746 ] 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "bdev_nvme_set_hotplug", 00:22:38.746 "params": { 00:22:38.746 "period_us": 100000, 00:22:38.746 "enable": false 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "bdev_malloc_create", 00:22:38.746 "params": { 00:22:38.746 "name": "malloc0", 00:22:38.746 "num_blocks": 8192, 00:22:38.746 "block_size": 4096, 00:22:38.746 "physical_block_size": 4096, 00:22:38.746 "uuid": "2db1d088-48f3-4018-b4a7-064144783ff9", 00:22:38.746 "optimal_io_boundary": 0, 00:22:38.746 "md_size": 0, 00:22:38.746 "dif_type": 0, 00:22:38.746 "dif_is_head_of_md": false, 00:22:38.746 "dif_pi_format": 0 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "bdev_wait_for_examine" 00:22:38.746 } 00:22:38.746 ] 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "subsystem": "nbd", 00:22:38.746 "config": [] 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "subsystem": "scheduler", 00:22:38.746 "config": [ 00:22:38.746 { 00:22:38.746 "method": "framework_set_scheduler", 00:22:38.746 "params": { 00:22:38.746 "name": "static" 00:22:38.746 } 00:22:38.746 } 00:22:38.746 ] 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "subsystem": "nvmf", 00:22:38.746 "config": [ 00:22:38.746 { 
00:22:38.746 "method": "nvmf_set_config", 00:22:38.746 "params": { 00:22:38.746 "discovery_filter": "match_any", 00:22:38.746 "admin_cmd_passthru": { 00:22:38.746 "identify_ctrlr": false 00:22:38.746 } 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_set_max_subsystems", 00:22:38.746 "params": { 00:22:38.746 "max_subsystems": 1024 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_set_crdt", 00:22:38.746 "params": { 00:22:38.746 "crdt1": 0, 00:22:38.746 "crdt2": 0, 00:22:38.746 "crdt3": 0 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_create_transport", 00:22:38.746 "params": { 00:22:38.746 "trtype": "TCP", 00:22:38.746 "max_queue_depth": 128, 00:22:38.746 "max_io_qpairs_per_ctrlr": 127, 00:22:38.746 "in_capsule_data_size": 4096, 00:22:38.746 "max_io_size": 131072, 00:22:38.746 "io_unit_size": 131072, 00:22:38.746 "max_aq_depth": 128, 00:22:38.746 "num_shared_buffers": 511, 00:22:38.746 "buf_cache_size": 4294967295, 00:22:38.746 "dif_insert_or_strip": false, 00:22:38.746 "zcopy": false, 00:22:38.746 "c2h_success": false, 00:22:38.746 "sock_priority": 0, 00:22:38.746 "abort_timeout_sec": 1, 00:22:38.746 "ack_timeout": 0, 00:22:38.746 "data_wr_pool_size": 0 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_create_subsystem", 00:22:38.746 "params": { 00:22:38.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.746 "allow_any_host": false, 00:22:38.746 "serial_number": "00000000000000000000", 00:22:38.746 "model_number": "SPDK bdev Controller", 00:22:38.746 "max_namespaces": 32, 00:22:38.746 "min_cntlid": 1, 00:22:38.746 "max_cntlid": 65519, 00:22:38.746 "ana_reporting": false 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_subsystem_add_host", 00:22:38.746 "params": { 00:22:38.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.746 "host": "nqn.2016-06.io.spdk:host1", 00:22:38.746 "psk": "key0" 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_subsystem_add_ns", 00:22:38.746 "params": { 00:22:38.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.746 "namespace": { 00:22:38.746 "nsid": 1, 00:22:38.746 "bdev_name": "malloc0", 00:22:38.746 "nguid": "2DB1D08848F34018B4A7064144783FF9", 00:22:38.746 "uuid": "2db1d088-48f3-4018-b4a7-064144783ff9", 00:22:38.746 "no_auto_visible": false 00:22:38.746 } 00:22:38.746 } 00:22:38.746 }, 00:22:38.746 { 00:22:38.746 "method": "nvmf_subsystem_add_listener", 00:22:38.746 "params": { 00:22:38.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.746 "listen_address": { 00:22:38.746 "trtype": "TCP", 00:22:38.746 "adrfam": "IPv4", 00:22:38.746 "traddr": "10.0.0.2", 00:22:38.746 "trsvcid": "4420" 00:22:38.746 }, 00:22:38.746 "secure_channel": false, 00:22:38.746 "sock_impl": "ssl" 00:22:38.746 } 00:22:38.746 } 00:22:38.746 ] 00:22:38.746 } 00:22:38.746 ] 00:22:38.746 }' 00:22:38.746 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:39.005 "subsystems": [ 00:22:39.005 { 00:22:39.005 "subsystem": "keyring", 00:22:39.005 "config": [ 00:22:39.005 { 00:22:39.005 "method": "keyring_file_add_key", 00:22:39.005 "params": { 00:22:39.005 "name": "key0", 00:22:39.005 "path": "/tmp/tmp.dpGKG7c5nw" 00:22:39.005 } 00:22:39.005 } 00:22:39.005 ] 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "subsystem": "iobuf", 
00:22:39.005 "config": [ 00:22:39.005 { 00:22:39.005 "method": "iobuf_set_options", 00:22:39.005 "params": { 00:22:39.005 "small_pool_count": 8192, 00:22:39.005 "large_pool_count": 1024, 00:22:39.005 "small_bufsize": 8192, 00:22:39.005 "large_bufsize": 135168 00:22:39.005 } 00:22:39.005 } 00:22:39.005 ] 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "subsystem": "sock", 00:22:39.005 "config": [ 00:22:39.005 { 00:22:39.005 "method": "sock_set_default_impl", 00:22:39.005 "params": { 00:22:39.005 "impl_name": "posix" 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "sock_impl_set_options", 00:22:39.005 "params": { 00:22:39.005 "impl_name": "ssl", 00:22:39.005 "recv_buf_size": 4096, 00:22:39.005 "send_buf_size": 4096, 00:22:39.005 "enable_recv_pipe": true, 00:22:39.005 "enable_quickack": false, 00:22:39.005 "enable_placement_id": 0, 00:22:39.005 "enable_zerocopy_send_server": true, 00:22:39.005 "enable_zerocopy_send_client": false, 00:22:39.005 "zerocopy_threshold": 0, 00:22:39.005 "tls_version": 0, 00:22:39.005 "enable_ktls": false 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "sock_impl_set_options", 00:22:39.005 "params": { 00:22:39.005 "impl_name": "posix", 00:22:39.005 "recv_buf_size": 2097152, 00:22:39.005 "send_buf_size": 2097152, 00:22:39.005 "enable_recv_pipe": true, 00:22:39.005 "enable_quickack": false, 00:22:39.005 "enable_placement_id": 0, 00:22:39.005 "enable_zerocopy_send_server": true, 00:22:39.005 "enable_zerocopy_send_client": false, 00:22:39.005 "zerocopy_threshold": 0, 00:22:39.005 "tls_version": 0, 00:22:39.005 "enable_ktls": false 00:22:39.005 } 00:22:39.005 } 00:22:39.005 ] 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "subsystem": "vmd", 00:22:39.005 "config": [] 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "subsystem": "accel", 00:22:39.005 "config": [ 00:22:39.005 { 00:22:39.005 "method": "accel_set_options", 00:22:39.005 "params": { 00:22:39.005 "small_cache_size": 128, 00:22:39.005 "large_cache_size": 16, 00:22:39.005 "task_count": 2048, 00:22:39.005 "sequence_count": 2048, 00:22:39.005 "buf_count": 2048 00:22:39.005 } 00:22:39.005 } 00:22:39.005 ] 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "subsystem": "bdev", 00:22:39.005 "config": [ 00:22:39.005 { 00:22:39.005 "method": "bdev_set_options", 00:22:39.005 "params": { 00:22:39.005 "bdev_io_pool_size": 65535, 00:22:39.005 "bdev_io_cache_size": 256, 00:22:39.005 "bdev_auto_examine": true, 00:22:39.005 "iobuf_small_cache_size": 128, 00:22:39.005 "iobuf_large_cache_size": 16 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_raid_set_options", 00:22:39.005 "params": { 00:22:39.005 "process_window_size_kb": 1024, 00:22:39.005 "process_max_bandwidth_mb_sec": 0 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_iscsi_set_options", 00:22:39.005 "params": { 00:22:39.005 "timeout_sec": 30 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_nvme_set_options", 00:22:39.005 "params": { 00:22:39.005 "action_on_timeout": "none", 00:22:39.005 "timeout_us": 0, 00:22:39.005 "timeout_admin_us": 0, 00:22:39.005 "keep_alive_timeout_ms": 10000, 00:22:39.005 "arbitration_burst": 0, 00:22:39.005 "low_priority_weight": 0, 00:22:39.005 "medium_priority_weight": 0, 00:22:39.005 "high_priority_weight": 0, 00:22:39.005 "nvme_adminq_poll_period_us": 10000, 00:22:39.005 "nvme_ioq_poll_period_us": 0, 00:22:39.005 "io_queue_requests": 512, 00:22:39.005 "delay_cmd_submit": true, 00:22:39.005 "transport_retry_count": 4, 00:22:39.005 
"bdev_retry_count": 3, 00:22:39.005 "transport_ack_timeout": 0, 00:22:39.005 "ctrlr_loss_timeout_sec": 0, 00:22:39.005 "reconnect_delay_sec": 0, 00:22:39.005 "fast_io_fail_timeout_sec": 0, 00:22:39.005 "disable_auto_failback": false, 00:22:39.005 "generate_uuids": false, 00:22:39.005 "transport_tos": 0, 00:22:39.005 "nvme_error_stat": false, 00:22:39.005 "rdma_srq_size": 0, 00:22:39.005 "io_path_stat": false, 00:22:39.005 "allow_accel_sequence": false, 00:22:39.005 "rdma_max_cq_size": 0, 00:22:39.005 "rdma_cm_event_timeout_ms": 0, 00:22:39.005 "dhchap_digests": [ 00:22:39.005 "sha256", 00:22:39.005 "sha384", 00:22:39.005 "sha512" 00:22:39.005 ], 00:22:39.005 "dhchap_dhgroups": [ 00:22:39.005 "null", 00:22:39.005 "ffdhe2048", 00:22:39.005 "ffdhe3072", 00:22:39.005 "ffdhe4096", 00:22:39.005 "ffdhe6144", 00:22:39.005 "ffdhe8192" 00:22:39.005 ] 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_nvme_attach_controller", 00:22:39.005 "params": { 00:22:39.005 "name": "nvme0", 00:22:39.005 "trtype": "TCP", 00:22:39.005 "adrfam": "IPv4", 00:22:39.005 "traddr": "10.0.0.2", 00:22:39.005 "trsvcid": "4420", 00:22:39.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.005 "prchk_reftag": false, 00:22:39.005 "prchk_guard": false, 00:22:39.005 "ctrlr_loss_timeout_sec": 0, 00:22:39.005 "reconnect_delay_sec": 0, 00:22:39.005 "fast_io_fail_timeout_sec": 0, 00:22:39.005 "psk": "key0", 00:22:39.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.005 "hdgst": false, 00:22:39.005 "ddgst": false 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_nvme_set_hotplug", 00:22:39.005 "params": { 00:22:39.005 "period_us": 100000, 00:22:39.005 "enable": false 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_enable_histogram", 00:22:39.005 "params": { 00:22:39.005 "name": "nvme0n1", 00:22:39.005 "enable": true 00:22:39.005 } 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "method": "bdev_wait_for_examine" 00:22:39.005 } 00:22:39.005 ] 00:22:39.005 }, 00:22:39.005 { 00:22:39.005 "subsystem": "nbd", 00:22:39.005 "config": [] 00:22:39.005 } 00:22:39.005 ] 00:22:39.005 }' 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1506406 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1506406 ']' 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1506406 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506406 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:39.005 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506406' 00:22:39.005 killing process with pid 1506406 00:22:39.006 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1506406 00:22:39.006 Received shutdown signal, test time was about 1.000000 seconds 00:22:39.006 00:22:39.006 Latency(us) 00:22:39.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:39.006 =================================================================================================================== 00:22:39.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.006 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1506406 00:22:39.263 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1506277 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1506277 ']' 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1506277 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506277 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506277' 00:22:39.264 killing process with pid 1506277 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1506277 00:22:39.264 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1506277 00:22:39.522 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:39.522 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.522 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:39.522 "subsystems": [ 00:22:39.522 { 00:22:39.522 "subsystem": "keyring", 00:22:39.522 "config": [ 00:22:39.522 { 00:22:39.522 "method": "keyring_file_add_key", 00:22:39.522 "params": { 00:22:39.522 "name": "key0", 00:22:39.522 "path": "/tmp/tmp.dpGKG7c5nw" 00:22:39.522 } 00:22:39.522 } 00:22:39.522 ] 00:22:39.522 }, 00:22:39.522 { 00:22:39.522 "subsystem": "iobuf", 00:22:39.522 "config": [ 00:22:39.522 { 00:22:39.522 "method": "iobuf_set_options", 00:22:39.522 "params": { 00:22:39.522 "small_pool_count": 8192, 00:22:39.522 "large_pool_count": 1024, 00:22:39.522 "small_bufsize": 8192, 00:22:39.522 "large_bufsize": 135168 00:22:39.522 } 00:22:39.522 } 00:22:39.522 ] 00:22:39.522 }, 00:22:39.522 { 00:22:39.522 "subsystem": "sock", 00:22:39.522 "config": [ 00:22:39.522 { 00:22:39.522 "method": "sock_set_default_impl", 00:22:39.522 "params": { 00:22:39.522 "impl_name": "posix" 00:22:39.522 } 00:22:39.522 }, 00:22:39.522 { 00:22:39.522 "method": "sock_impl_set_options", 00:22:39.522 "params": { 00:22:39.522 "impl_name": "ssl", 00:22:39.522 "recv_buf_size": 4096, 00:22:39.522 "send_buf_size": 4096, 00:22:39.522 "enable_recv_pipe": true, 00:22:39.522 "enable_quickack": false, 00:22:39.522 "enable_placement_id": 0, 00:22:39.522 "enable_zerocopy_send_server": true, 00:22:39.522 "enable_zerocopy_send_client": false, 00:22:39.522 "zerocopy_threshold": 0, 00:22:39.522 "tls_version": 0, 00:22:39.522 "enable_ktls": false 00:22:39.522 } 00:22:39.522 }, 00:22:39.522 { 00:22:39.522 "method": "sock_impl_set_options", 00:22:39.523 "params": { 00:22:39.523 "impl_name": "posix", 00:22:39.523 "recv_buf_size": 
2097152, 00:22:39.523 "send_buf_size": 2097152, 00:22:39.523 "enable_recv_pipe": true, 00:22:39.523 "enable_quickack": false, 00:22:39.523 "enable_placement_id": 0, 00:22:39.523 "enable_zerocopy_send_server": true, 00:22:39.523 "enable_zerocopy_send_client": false, 00:22:39.523 "zerocopy_threshold": 0, 00:22:39.523 "tls_version": 0, 00:22:39.523 "enable_ktls": false 00:22:39.523 } 00:22:39.523 } 00:22:39.523 ] 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "subsystem": "vmd", 00:22:39.523 "config": [] 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "subsystem": "accel", 00:22:39.523 "config": [ 00:22:39.523 { 00:22:39.523 "method": "accel_set_options", 00:22:39.523 "params": { 00:22:39.523 "small_cache_size": 128, 00:22:39.523 "large_cache_size": 16, 00:22:39.523 "task_count": 2048, 00:22:39.523 "sequence_count": 2048, 00:22:39.523 "buf_count": 2048 00:22:39.523 } 00:22:39.523 } 00:22:39.523 ] 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "subsystem": "bdev", 00:22:39.523 "config": [ 00:22:39.523 { 00:22:39.523 "method": "bdev_set_options", 00:22:39.523 "params": { 00:22:39.523 "bdev_io_pool_size": 65535, 00:22:39.523 "bdev_io_cache_size": 256, 00:22:39.523 "bdev_auto_examine": true, 00:22:39.523 "iobuf_small_cache_size": 128, 00:22:39.523 "iobuf_large_cache_size": 16 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "bdev_raid_set_options", 00:22:39.523 "params": { 00:22:39.523 "process_window_size_kb": 1024, 00:22:39.523 "process_max_bandwidth_mb_sec": 0 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "bdev_iscsi_set_options", 00:22:39.523 "params": { 00:22:39.523 "timeout_sec": 30 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "bdev_nvme_set_options", 00:22:39.523 "params": { 00:22:39.523 "action_on_timeout": "none", 00:22:39.523 "timeout_us": 0, 00:22:39.523 "timeout_admin_us": 0, 00:22:39.523 "keep_alive_timeout_ms": 10000, 00:22:39.523 "arbitration_burst": 0, 00:22:39.523 "low_priority_weight": 0, 00:22:39.523 "medium_priority_weight": 0, 00:22:39.523 "high_priority_weight": 0, 00:22:39.523 "nvme_adminq_poll_period_us": 10000, 00:22:39.523 "nvme_ioq_poll_period_us": 0, 00:22:39.523 "io_queue_requests": 0, 00:22:39.523 "delay_cmd_submit": true, 00:22:39.523 "transport_retry_count": 4, 00:22:39.523 "bdev_retry_count": 3, 00:22:39.523 "transport_ack_timeout": 0, 00:22:39.523 "ctrlr_loss_timeout_sec": 0, 00:22:39.523 "reconnect_delay_sec": 0, 00:22:39.523 "fast_io_fail_timeout_sec": 0, 00:22:39.523 "disable_auto_failback": false, 00:22:39.523 "generate_uuids": false, 00:22:39.523 "transport_tos": 0, 00:22:39.523 "nvme_error_stat": false, 00:22:39.523 "rdma_srq_size": 0, 00:22:39.523 "io_path_stat": false, 00:22:39.523 "allow_accel_sequence": false, 00:22:39.523 "rdma_max_cq_size": 0, 00:22:39.523 "rdma_cm_event_timeout_ms": 0, 00:22:39.523 "dhchap_digests": [ 00:22:39.523 "sha256", 00:22:39.523 "sha384", 00:22:39.523 "sha512" 00:22:39.523 ], 00:22:39.523 "dhchap_dhgroups": [ 00:22:39.523 "null", 00:22:39.523 "ffdhe2048", 00:22:39.523 "ffdhe3072", 00:22:39.523 "ffdhe4096", 00:22:39.523 "ffdhe6144", 00:22:39.523 "ffdhe8192" 00:22:39.523 ] 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "bdev_nvme_set_hotplug", 00:22:39.523 "params": { 00:22:39.523 "period_us": 100000, 00:22:39.523 "enable": false 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "bdev_malloc_create", 00:22:39.523 "params": { 00:22:39.523 "name": "malloc0", 00:22:39.523 "num_blocks": 8192, 00:22:39.523 "block_size": 4096, 
00:22:39.523 "physical_block_size": 4096, 00:22:39.523 "uuid": "2db1d088-48f3-4018-b4a7-064144783ff9", 00:22:39.523 "optimal_io_boundary": 0, 00:22:39.523 "md_size": 0, 00:22:39.523 "dif_type": 0, 00:22:39.523 "dif_is_head_of_md": false, 00:22:39.523 "dif_pi_format": 0 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "bdev_wait_for_examine" 00:22:39.523 } 00:22:39.523 ] 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "subsystem": "nbd", 00:22:39.523 "config": [] 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "subsystem": "scheduler", 00:22:39.523 "config": [ 00:22:39.523 { 00:22:39.523 "method": "framework_set_scheduler", 00:22:39.523 "params": { 00:22:39.523 "name": "static" 00:22:39.523 } 00:22:39.523 } 00:22:39.523 ] 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "subsystem": "nvmf", 00:22:39.523 "config": [ 00:22:39.523 { 00:22:39.523 "method": "nvmf_set_config", 00:22:39.523 "params": { 00:22:39.523 "discovery_filter": "match_any", 00:22:39.523 "admin_cmd_passthru": { 00:22:39.523 "identify_ctrlr": false 00:22:39.523 } 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "nvmf_set_max_subsystems", 00:22:39.523 "params": { 00:22:39.523 "max_subsystems": 1024 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "nvmf_set_crdt", 00:22:39.523 "params": { 00:22:39.523 "crdt1": 0, 00:22:39.523 "crdt2": 0, 00:22:39.523 "crdt3": 0 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "nvmf_create_transport", 00:22:39.523 "params": { 00:22:39.523 "trtype": "TCP", 00:22:39.523 "max_queue_depth": 128, 00:22:39.523 "max_io_qpairs_per_ctrlr": 127, 00:22:39.523 "in_capsule_data_size": 4096, 00:22:39.523 "max_io_size": 131072, 00:22:39.523 "io_unit_size": 131072, 00:22:39.523 "max_aq_depth": 128, 00:22:39.523 "num_shared_buffers": 511, 00:22:39.523 "buf_cache_size": 4294967295, 00:22:39.523 "dif_insert_or_strip": false, 00:22:39.523 "zcopy": false, 00:22:39.523 "c2h_success": false, 00:22:39.523 "sock_priority": 0, 00:22:39.523 "abort_timeout_sec": 1, 00:22:39.523 "ack_timeout": 0, 00:22:39.523 "data_wr_pool_size": 0 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "nvmf_create_subsystem", 00:22:39.523 "params": { 00:22:39.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.523 "allow_any_host": false, 00:22:39.523 "serial_number": "00000000000000000000", 00:22:39.523 "model_number": "SPDK bdev Controller", 00:22:39.523 "max_namespaces": 32, 00:22:39.523 "min_cntlid": 1, 00:22:39.523 "max_cntlid": 65519, 00:22:39.523 "ana_reporting": false 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "nvmf_subsystem_add_host", 00:22:39.523 "params": { 00:22:39.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.523 "host": "nqn.2016-06.io.spdk:host1", 00:22:39.523 "psk": "key0" 00:22:39.523 } 00:22:39.523 }, 00:22:39.523 { 00:22:39.523 "method": "nvmf_subsystem_add_ns", 00:22:39.523 "params": { 00:22:39.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.523 "namespace": { 00:22:39.523 "nsid": 1, 00:22:39.523 "bdev_name": "malloc0", 00:22:39.523 "nguid": "2DB1D08848F34018B4A7064144783FF9", 00:22:39.523 "uuid": "2db1d088-48f3-4018-b4a7-064144783ff9", 00:22:39.523 "no_auto_visible": false 00:22:39.524 } 00:22:39.524 } 00:22:39.524 }, 00:22:39.524 { 00:22:39.524 "method": "nvmf_subsystem_add_listener", 00:22:39.524 "params": { 00:22:39.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.524 "listen_address": { 00:22:39.524 "trtype": "TCP", 00:22:39.524 "adrfam": "IPv4", 00:22:39.524 "traddr": "10.0.0.2", 00:22:39.524 "trsvcid": 
"4420" 00:22:39.524 }, 00:22:39.524 "secure_channel": false, 00:22:39.524 "sock_impl": "ssl" 00:22:39.524 } 00:22:39.524 } 00:22:39.524 ] 00:22:39.524 } 00:22:39.524 ] 00:22:39.524 }' 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1506787 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1506787 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1506787 ']' 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.524 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.524 [2024-07-26 18:23:05.480416] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:39.524 [2024-07-26 18:23:05.480515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.524 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.524 [2024-07-26 18:23:05.518763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:39.524 [2024-07-26 18:23:05.545426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.524 [2024-07-26 18:23:05.628261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.524 [2024-07-26 18:23:05.628317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.524 [2024-07-26 18:23:05.628338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.524 [2024-07-26 18:23:05.628350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.524 [2024-07-26 18:23:05.628360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.524 [2024-07-26 18:23:05.628429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.782 [2024-07-26 18:23:05.867629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.782 [2024-07-26 18:23:05.907891] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.782 [2024-07-26 18:23:05.908181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1506846 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1506846 /var/tmp/bdevperf.sock 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1506846 ']' 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
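The restart above replays the configuration captured earlier with save_config: the script echoes the JSON into a process substitution, which the target receives as /dev/fd/62 (the bdevperf instance below is fed its config the same way on /dev/fd/63). A minimal sketch of the pattern, assuming the rpc.py and nvmf_tgt paths used throughout this run:

  # capture the live target config, then boot a fresh target from it
  tgtcfg=$(scripts/rpc.py save_config)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")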
00:22:40.349 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:40.349 "subsystems": [ 00:22:40.349 { 00:22:40.349 "subsystem": "keyring", 00:22:40.349 "config": [ 00:22:40.349 { 00:22:40.349 "method": "keyring_file_add_key", 00:22:40.349 "params": { 00:22:40.349 "name": "key0", 00:22:40.349 "path": "/tmp/tmp.dpGKG7c5nw" 00:22:40.349 } 00:22:40.349 } 00:22:40.349 ] 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "subsystem": "iobuf", 00:22:40.349 "config": [ 00:22:40.349 { 00:22:40.349 "method": "iobuf_set_options", 00:22:40.349 "params": { 00:22:40.349 "small_pool_count": 8192, 00:22:40.349 "large_pool_count": 1024, 00:22:40.349 "small_bufsize": 8192, 00:22:40.349 "large_bufsize": 135168 00:22:40.349 } 00:22:40.349 } 00:22:40.349 ] 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "subsystem": "sock", 00:22:40.349 "config": [ 00:22:40.349 { 00:22:40.349 "method": "sock_set_default_impl", 00:22:40.349 "params": { 00:22:40.349 "impl_name": "posix" 00:22:40.349 } 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "method": "sock_impl_set_options", 00:22:40.349 "params": { 00:22:40.349 "impl_name": "ssl", 00:22:40.349 "recv_buf_size": 4096, 00:22:40.349 "send_buf_size": 4096, 00:22:40.349 "enable_recv_pipe": true, 00:22:40.349 "enable_quickack": false, 00:22:40.349 "enable_placement_id": 0, 00:22:40.349 "enable_zerocopy_send_server": true, 00:22:40.349 "enable_zerocopy_send_client": false, 00:22:40.349 "zerocopy_threshold": 0, 00:22:40.349 "tls_version": 0, 00:22:40.349 "enable_ktls": false 00:22:40.349 } 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "method": "sock_impl_set_options", 00:22:40.349 "params": { 00:22:40.349 "impl_name": "posix", 00:22:40.349 "recv_buf_size": 2097152, 00:22:40.349 "send_buf_size": 2097152, 00:22:40.349 "enable_recv_pipe": true, 00:22:40.349 "enable_quickack": false, 00:22:40.349 "enable_placement_id": 0, 00:22:40.349 "enable_zerocopy_send_server": true, 00:22:40.349 "enable_zerocopy_send_client": false, 00:22:40.349 "zerocopy_threshold": 0, 00:22:40.349 "tls_version": 0, 00:22:40.349 "enable_ktls": false 00:22:40.349 } 00:22:40.349 } 00:22:40.349 ] 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "subsystem": "vmd", 00:22:40.349 "config": [] 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "subsystem": "accel", 00:22:40.349 "config": [ 00:22:40.349 { 00:22:40.349 "method": "accel_set_options", 00:22:40.349 "params": { 00:22:40.349 "small_cache_size": 128, 00:22:40.349 "large_cache_size": 16, 00:22:40.349 "task_count": 2048, 00:22:40.349 "sequence_count": 2048, 00:22:40.349 "buf_count": 2048 00:22:40.349 } 00:22:40.349 } 00:22:40.349 ] 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "subsystem": "bdev", 00:22:40.349 "config": [ 00:22:40.349 { 00:22:40.349 "method": "bdev_set_options", 00:22:40.349 "params": { 00:22:40.349 "bdev_io_pool_size": 65535, 00:22:40.349 "bdev_io_cache_size": 256, 00:22:40.349 "bdev_auto_examine": true, 00:22:40.349 "iobuf_small_cache_size": 128, 00:22:40.349 "iobuf_large_cache_size": 16 00:22:40.349 } 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "method": "bdev_raid_set_options", 00:22:40.349 "params": { 00:22:40.349 "process_window_size_kb": 1024, 00:22:40.349 "process_max_bandwidth_mb_sec": 0 00:22:40.349 } 00:22:40.349 }, 00:22:40.349 { 00:22:40.349 "method": "bdev_iscsi_set_options", 00:22:40.349 "params": { 00:22:40.349 "timeout_sec": 30 00:22:40.350 } 00:22:40.350 }, 00:22:40.350 { 00:22:40.350 "method": "bdev_nvme_set_options", 00:22:40.350 "params": { 00:22:40.350 "action_on_timeout": "none", 00:22:40.350 "timeout_us": 0, 
00:22:40.350 "timeout_admin_us": 0, 00:22:40.350 "keep_alive_timeout_ms": 10000, 00:22:40.350 "arbitration_burst": 0, 00:22:40.350 "low_priority_weight": 0, 00:22:40.350 "medium_priority_weight": 0, 00:22:40.350 "high_priority_weight": 0, 00:22:40.350 "nvme_adminq_poll_period_us": 10000, 00:22:40.350 "nvme_ioq_poll_period_us": 0, 00:22:40.350 "io_queue_requests": 512, 00:22:40.350 "delay_cmd_submit": true, 00:22:40.350 "transport_retry_count": 4, 00:22:40.350 "bdev_retry_count": 3, 00:22:40.350 "transport_ack_timeout": 0, 00:22:40.350 "ctrlr_loss_timeout_sec": 0, 00:22:40.350 "reconnect_delay_sec": 0, 00:22:40.350 "fast_io_fail_timeout_sec": 0, 00:22:40.350 "disable_auto_failback": false, 00:22:40.350 "generate_uuids": false, 00:22:40.350 "transport_tos": 0, 00:22:40.350 "nvme_error_stat": false, 00:22:40.350 "rdma_srq_size": 0, 00:22:40.350 "io_path_stat": false, 00:22:40.350 "allow_accel_sequence": false, 00:22:40.350 "rdma_max_cq_size": 0, 00:22:40.350 "rdma_cm_event_timeout_ms": 0, 00:22:40.350 "dhchap_digests": [ 00:22:40.350 "sha256", 00:22:40.350 "sha384", 00:22:40.350 "sha512" 00:22:40.350 ], 00:22:40.350 "dhchap_dhgroups": [ 00:22:40.350 "null", 00:22:40.350 "ffdhe2048", 00:22:40.350 "ffdhe3072", 00:22:40.350 "ffdhe4096", 00:22:40.350 "ffdhe6144", 00:22:40.350 "ffdhe8192" 00:22:40.350 ] 00:22:40.350 } 00:22:40.350 }, 00:22:40.350 { 00:22:40.350 "method": "bdev_nvme_attach_controller", 00:22:40.350 "params": { 00:22:40.350 "name": "nvme0", 00:22:40.350 "trtype": "TCP", 00:22:40.350 "adrfam": "IPv4", 00:22:40.350 "traddr": "10.0.0.2", 00:22:40.350 "trsvcid": "4420", 00:22:40.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.350 "prchk_reftag": false, 00:22:40.350 "prchk_guard": false, 00:22:40.350 "ctrlr_loss_timeout_sec": 0, 00:22:40.350 "reconnect_delay_sec": 0, 00:22:40.350 "fast_io_fail_timeout_sec": 0, 00:22:40.350 "psk": "key0", 00:22:40.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.350 "hdgst": false, 00:22:40.350 "ddgst": false 00:22:40.350 } 00:22:40.350 }, 00:22:40.350 { 00:22:40.350 "method": "bdev_nvme_set_hotplug", 00:22:40.350 "params": { 00:22:40.350 "period_us": 100000, 00:22:40.350 "enable": false 00:22:40.350 } 00:22:40.350 }, 00:22:40.350 { 00:22:40.350 "method": "bdev_enable_histogram", 00:22:40.350 "params": { 00:22:40.350 "name": "nvme0n1", 00:22:40.350 "enable": true 00:22:40.350 } 00:22:40.350 }, 00:22:40.350 { 00:22:40.350 "method": "bdev_wait_for_examine" 00:22:40.350 } 00:22:40.350 ] 00:22:40.350 }, 00:22:40.350 { 00:22:40.350 "subsystem": "nbd", 00:22:40.350 "config": [] 00:22:40.350 } 00:22:40.350 ] 00:22:40.350 }' 00:22:40.350 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.350 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.608 [2024-07-26 18:23:06.500535] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:40.608 [2024-07-26 18:23:06.500618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506846 ] 00:22:40.608 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.608 [2024-07-26 18:23:06.538895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:40.608 [2024-07-26 18:23:06.567196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.608 [2024-07-26 18:23:06.657789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.866 [2024-07-26 18:23:06.831688] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.430 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.430 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.430 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.430 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:41.688 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.688 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.688 Running I/O for 1 seconds... 00:22:43.063 00:22:43.063 Latency(us) 00:22:43.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.063 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:43.063 Verification LBA range: start 0x0 length 0x2000 00:22:43.063 nvme0n1 : 1.05 2172.30 8.49 0.00 0.00 57696.55 10340.12 87769.69 00:22:43.063 =================================================================================================================== 00:22:43.063 Total : 2172.30 8.49 0.00 0.00 57696.55 10340.12 87769.69 00:22:43.063 0 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:43.063 nvmf_trace.0 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1506846 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1506846 ']' 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 1506846 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506846 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506846' 00:22:43.063 killing process with pid 1506846 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1506846 00:22:43.063 Received shutdown signal, test time was about 1.000000 seconds 00:22:43.063 00:22:43.063 Latency(us) 00:22:43.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.063 =================================================================================================================== 00:22:43.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.063 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1506846 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.321 rmmod nvme_tcp 00:22:43.321 rmmod nvme_fabrics 00:22:43.321 rmmod nvme_keyring 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.321 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1506787 ']' 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1506787 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1506787 ']' 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1506787 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506787 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.322 18:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506787' 00:22:43.322 killing process with pid 1506787 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1506787 00:22:43.322 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1506787 00:22:43.581 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.581 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.581 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.581 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.582 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.582 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.582 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.582 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BN0cGYW6tQ /tmp/tmp.cPoTWpt96E /tmp/tmp.dpGKG7c5nw 00:22:45.484 00:22:45.484 real 1m19.404s 00:22:45.484 user 2m5.852s 00:22:45.484 sys 0m29.068s 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.484 ************************************ 00:22:45.484 END TEST nvmf_tls 00:22:45.484 ************************************ 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.484 18:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:45.743 ************************************ 00:22:45.743 START TEST nvmf_fips 00:22:45.743 ************************************ 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:45.743 * Looking for test storage... 
00:22:45.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:45.743 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
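The trace above is scripts/common.sh comparing the detected OpenSSL version (3.0.9) against the FIPS test's 3.0.0 floor: ge splits both strings on '.', '-' and ':' and walks the components left to right. A minimal standalone sketch of the same idea, assuming purely numeric components (the real cmp_versions guards each component with a regex check; the name ge_version and the exit handling here are illustrative, not the script's own code):

    # Succeed when dotted version $1 >= $2, comparing numeric components.
    ge_version() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0   # strictly newer
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1   # strictly older
        done
        return 0   # all components equal counts as >=
    }

    ge_version "$(openssl version | awk '{print $2}')" 3.0.0 || exit 1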
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:45.744 Error setting digest 00:22:45.744 0072F32BDA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:45.744 0072F32BDA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.744 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
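With OPENSSL_CONF pointed at the generated spdk_fips.conf, the providers list must contain both the base and fips providers, and the test then proves enforcement by requiring a non-approved digest to fail: the NOT wrapper above inverts the exit status of openssl md5, so the two "digital envelope routines" errors are the expected FIPS rejection, not a test failure. A rough equivalent of that final check (the real NOT helper in autotest_common.sh does more bookkeeping around exit codes):

    export OPENSSL_CONF=spdk_fips.conf
    # Both providers must be loaded for the test to proceed.
    openssl list -providers | grep name
    # MD5 is not FIPS-approved; success here means FIPS is NOT enforced.
    if openssl md5 <(echo test) >/dev/null 2>&1; then
        echo 'MD5 unexpectedly succeeded - FIPS provider not active' >&2
        exit 1
    fi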
local -ga e810 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:47.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:22:47.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:47.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:47.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.647 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.648 
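The device discovery above is plain sysfs walking: gather_supported_nvmf_pci_devs keeps a fixed table of Intel E810/X722 and Mellanox PCI IDs, and each matching PCI function is mapped to its kernel net interface through /sys/bus/pci/devices/<bdf>/net/. Here both E810 ports (0x8086:0x159b) resolve to cvl_0_0 and cvl_0_1. The lookup reduces to roughly:

    # List net interfaces backed by a given PCI vendor:device pair.
    vendor=0x8086 device=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"   # e.g. 0000:0a:00.0: cvl_0_0
        done
    done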
18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.648 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:47.907 00:22:47.907 --- 10.0.0.2 ping statistics --- 00:22:47.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.907 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:47.907 00:22:47.907 --- 10.0.0.1 ping statistics --- 00:22:47.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.907 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:47.907 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1509201 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1509201 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1509201 ']' 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.908 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.908 [2024-07-26 18:23:14.010121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
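nvmf_tcp_init then builds the two-endpoint topology the rest of the run depends on: the first port is moved into a private network namespace to play the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and reachability is ping-verified in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator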
00:22:47.908 [2024-07-26 18:23:14.010216] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.908 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.908 [2024-07-26 18:23:14.049475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:48.166 [2024-07-26 18:23:14.075840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.166 [2024-07-26 18:23:14.164984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.166 [2024-07-26 18:23:14.165051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.166 [2024-07-26 18:23:14.165075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.166 [2024-07-26 18:23:14.165088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.166 [2024-07-26 18:23:14.165099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.166 [2024-07-26 18:23:14.165154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:48.166 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:48.167 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:48.167 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:48.424 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:48.424 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:48.424 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:48.682 [2024-07-26 18:23:14.592539] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.682 [2024-07-26 18:23:14.608536] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:22:48.682 [2024-07-26 18:23:14.608770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.682 [2024-07-26 18:23:14.640265] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:48.682 malloc0 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1509349 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1509349 /var/tmp/bdevperf.sock 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1509349 ']' 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.682 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:48.682 [2024-07-26 18:23:14.736500] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:48.682 [2024-07-26 18:23:14.736592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509349 ] 00:22:48.682 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.682 [2024-07-26 18:23:14.768373] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
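The FIPS test then stages TLS: a PSK interchange key is written to key.txt with 0600 permissions, nvmf_tgt is started inside the namespace, and setup_nvmf_tgt_conf drives it over rpc.py so the listener comes up on 10.0.0.2:4420 with the (deprecated, per the warning above) PSK-path host entry. The RPC sequence is approximately the following sketch; the malloc bdev size and the --secure-channel flag are assumptions about setup_nvmf_tgt_conf, while the NQNs, address, port and --psk usage come straight from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc bdev_malloc_create 32 4096 -b malloc0                 # assumed backing bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel            # flag assumed
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"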
00:22:48.682 [2024-07-26 18:23:14.796137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.939 [2024-07-26 18:23:14.882976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.939 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.939 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:48.939 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:49.197 [2024-07-26 18:23:15.223717] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.197 [2024-07-26 18:23:15.223842] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.197 TLSTESTn1 00:22:49.197 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.455 Running I/O for 10 seconds... 00:22:59.440 00:22:59.440 Latency(us) 00:22:59.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.440 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.440 Verification LBA range: start 0x0 length 0x2000 00:22:59.440 TLSTESTn1 : 10.05 2450.19 9.57 0.00 0.00 52106.03 6189.51 81167.55 00:22:59.440 =================================================================================================================== 00:22:59.440 Total : 2450.19 9.57 0.00 0.00 52106.03 6189.51 81167.55 00:22:59.440 0 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:59.440 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:59.440 nvmf_trace.0 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1509349 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' 
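On the initiator side the run follows the standard bdevperf-as-daemon flow: start bdevperf with -z so it idles on its own RPC socket, attach an NVMe-oF controller over TLS with the same PSK, then kick the workload with bdevperf.py. The attach creates bdev TLSTESTn1, and the 10-second verify run above sustains roughly 2450 IOPS at 4 KiB, queue depth 128. Reassembled from this run's commands:

    sock=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &

    ./scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key"                      # creates bdev TLSTESTn1

    ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests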
-z 1509349 ']' 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1509349 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1509349 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509349' 00:22:59.698 killing process with pid 1509349 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1509349 00:22:59.698 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.698 00:22:59.698 Latency(us) 00:22:59.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.698 =================================================================================================================== 00:22:59.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.698 [2024-07-26 18:23:25.619923] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1509349 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.698 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.698 rmmod nvme_tcp 00:22:59.698 rmmod nvme_fabrics 00:22:59.955 rmmod nvme_keyring 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1509201 ']' 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1509201 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1509201 ']' 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1509201 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1509201 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509201' 00:22:59.955 killing process with pid 1509201 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1509201 00:22:59.955 [2024-07-26 18:23:25.902329] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:59.955 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1509201 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.214 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:02.117 00:23:02.117 real 0m16.533s 00:23:02.117 user 0m20.089s 00:23:02.117 sys 0m6.691s 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:02.117 ************************************ 00:23:02.117 END TEST nvmf_fips 00:23:02.117 ************************************ 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.117 ************************************ 00:23:02.117 START TEST nvmf_fuzz 00:23:02.117 ************************************ 00:23:02.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:02.375 * Looking for test storage... 
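Teardown runs the killprocess helper visible in the trace for both pids: probe liveness with kill -0, read the process name with ps (special-casing anything running under sudo), then kill and reap, before unloading the nvme kernel modules and deleting the PSK file. A condensed sketch of that helper, simplified from autotest_common.sh (the real sudo branch kills the wrapped child process rather than bailing out as done here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                  # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1             # don't SIGKILL the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap; tolerate nonzero exit
    }

    killprocess "$bdevperf_pid"   # reactor_2 in this run
    killprocess "$nvmfpid"        # reactor_1 in this run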
00:23:02.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.376 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:04.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:04.279 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.279 18:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:04.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:04.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.279 
18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.279 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:23:04.279 00:23:04.279 --- 10.0.0.2 ping statistics --- 00:23:04.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.279 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:04.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:23:04.280 00:23:04.280 --- 10.0.0.1 ping statistics --- 00:23:04.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.280 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1512471 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1512471 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1512471 
']' 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.280 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.539 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.799 Malloc0 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:04.799 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:36.912 Fuzzing completed. Shutting down the fuzz application 00:23:36.912 00:23:36.912 Dumping successful admin opcodes: 00:23:36.912 8, 9, 10, 24, 00:23:36.912 Dumping successful io opcodes: 00:23:36.912 0, 9, 00:23:36.912 NS: 0x200003aeff00 I/O qp, Total commands completed: 448173, total successful commands: 2605, random_seed: 336862080 00:23:36.912 NS: 0x200003aeff00 admin qp, Total commands completed: 56016, total successful commands: 445, random_seed: 503823424 00:23:36.912 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:36.912 Fuzzing completed. Shutting down the fuzz application 00:23:36.912 00:23:36.912 Dumping successful admin opcodes: 00:23:36.912 24, 00:23:36.912 Dumping successful io opcodes: 00:23:36.912 00:23:36.912 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2045134630 00:23:36.912 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2045262138 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.912 rmmod nvme_tcp 00:23:36.912 rmmod nvme_fabrics 00:23:36.912 rmmod nvme_keyring 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1512471 ']' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
1512471 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1512471 ']' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1512471 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1512471 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1512471' 00:23:36.912 killing process with pid 1512471 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1512471 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1512471 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.912 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.455 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:39.455 00:23:39.455 real 0m36.799s 00:23:39.455 user 0m50.602s 00:23:39.455 sys 0m15.243s 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.455 ************************************ 00:23:39.455 END TEST nvmf_fuzz 00:23:39.455 ************************************ 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:39.455 ************************************ 00:23:39.455 START TEST 
nvmf_multiconnection 00:23:39.455 ************************************ 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:39.455 * Looking for test storage... 00:23:39.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.455 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.363 18:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:41.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:41.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.363 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:41.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:41.364 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:41.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:23:41.364 00:23:41.364 --- 10.0.0.2 ping statistics --- 00:23:41.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.364 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:23:41.364 00:23:41.364 --- 10.0.0.1 ping statistics --- 00:23:41.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.364 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1518304 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1518304 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1518304 ']' 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
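[Editor's note] For readers following the trace, the network plumbing that the nvmf_tcp_init step above performs condenses to the shell sketch below. It reuses the interface and namespace names observed in this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk); the authoritative logic lives in spdk/test/nvmf/common.sh, so treat this as an illustrative summary of the commands echoed in the log, not the harness itself:

    # Target side runs inside a network namespace; the initiator stays in the host namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one NIC port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (namespace side)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # host -> namespace reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host reachability check
    modprobe nvme-tcp                                    # kernel NVMe/TCP initiator support
    # nvmf_tgt is then launched inside the namespace, as the log shows, e.g.:
    # ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The RPC calls that follow (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) then expose the Malloc bdevs over the 10.0.0.2:4420 listener set up here.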
00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.364 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.364 [2024-07-26 18:24:07.267863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:41.364 [2024-07-26 18:24:07.267931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.364 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.364 [2024-07-26 18:24:07.306132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:41.364 [2024-07-26 18:24:07.336616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.364 [2024-07-26 18:24:07.429920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.364 [2024-07-26 18:24:07.429981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.364 [2024-07-26 18:24:07.430007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.364 [2024-07-26 18:24:07.430021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.364 [2024-07-26 18:24:07.430034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.364 [2024-07-26 18:24:07.430111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.364 [2024-07-26 18:24:07.430169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.364 [2024-07-26 18:24:07.430288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.364 [2024-07-26 18:24:07.430291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 [2024-07-26 18:24:07.587362] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 Malloc1 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 [2024-07-26 18:24:07.643184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 Malloc2 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 Malloc3 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.624 Malloc4 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.624 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.883 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 Malloc5 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 
18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 Malloc6 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.884 18:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 Malloc7 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 Malloc8 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:41.884 18:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 Malloc9 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.884 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.143 Malloc10 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.143 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.144 Malloc11 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.144 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:42.710 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:42.710 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:42.710 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:42.710 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:42.710 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.239 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:45.497 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:45.497 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:45.497 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.497 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
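Stripped of the xtrace noise, multiconnection.sh@21-@25 runs the same four RPCs for each of the NVMF_SUBSYS=11 subsystems before the connect loop at @28 begins. A condensed sketch of that setup loop, with commands and arguments taken verbatim from the trace and rpc_cmd as sketched earlier:

NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    # @22: 64 MiB malloc bdev to back the namespace
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # @23: subsystem allowing any host (-a), serial number SPDK$i
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # @24: expose the bdev as a namespace of the subsystem
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # @25: TCP listener on the target address used throughout this run
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done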
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:45.497 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.397 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:48.330 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:48.330 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:48.330 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:48.330 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:48.330 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.233 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:50.805 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:50.805 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.805 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:50.805 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:50.805 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.733 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:53.667 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:53.667 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:53.667 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:53.667 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:53.667 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.568 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:56.504 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:56.504 18:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:56.504 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:56.504 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:56.504 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.406 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:59.338 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:59.338 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.338 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.338 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:59.338 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.240 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
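The connect loop pairs each nvme connect at multiconnection.sh@29 with a waitforserial poll, and the xtrace at autotest_common.sh@1198-@1208 shows that poll counting lsblk rows whose SERIAL column matches. A reconstruction of that pairing, with the host NQN/ID copied verbatim from the log (the real waitforserial lives in autotest_common.sh and its failure path is not visible in this trace):

HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
HOSTID="5b23e107-7094-e311-b1cb-001e67a97d55"

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while ((i++ <= 15)); do                       # @1206: bounded retry loop
        sleep 2                                   # @1205: let the kernel enumerate
        # @1207: count block devices whose serial matches, e.g. SPDK8
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0   # @1208
    done
    return 1    # assumption: timeout handling not shown in the trace
}

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420   # @29
    waitforserial "SPDK$i"                                            # @30
done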
00:24:02.177 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:02.177 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:02.177 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.177 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:02.177 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:04.077 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.078 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:05.009 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:05.009 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:05.009 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:05.009 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:05.009 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.913 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:07.850 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:07.850 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.850 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.850 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:07.850 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:10.383 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.384 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:10.662 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:10.662 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:10.662 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.662 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:10.662 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:13.197 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:13.197 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:13.197 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:13.197 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:13.197 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:13.198 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:13.198 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:13.198 [global] 00:24:13.198 thread=1 00:24:13.198 invalidate=1 00:24:13.198 rw=read 00:24:13.198 time_based=1 00:24:13.198 runtime=10 00:24:13.198 ioengine=libaio 00:24:13.198 direct=1 00:24:13.198 bs=262144 00:24:13.198 iodepth=64 00:24:13.198 norandommap=1 00:24:13.198 numjobs=1 00:24:13.198 00:24:13.198 [job0] 00:24:13.198 filename=/dev/nvme0n1 00:24:13.198 [job1] 00:24:13.198 filename=/dev/nvme10n1 00:24:13.198 [job2] 00:24:13.198 filename=/dev/nvme1n1 00:24:13.198 [job3] 00:24:13.198 filename=/dev/nvme2n1 00:24:13.198 [job4] 00:24:13.198 filename=/dev/nvme3n1 00:24:13.198 [job5] 00:24:13.198 filename=/dev/nvme4n1 00:24:13.198 [job6] 00:24:13.198 filename=/dev/nvme5n1 00:24:13.198 [job7] 00:24:13.198 filename=/dev/nvme6n1 00:24:13.198 [job8] 00:24:13.198 filename=/dev/nvme7n1 00:24:13.198 [job9] 00:24:13.198 filename=/dev/nvme8n1 00:24:13.198 [job10] 00:24:13.198 filename=/dev/nvme9n1 00:24:13.198 Could not set queue depth (nvme0n1) 00:24:13.198 Could not set queue depth (nvme10n1) 00:24:13.198 Could not set queue depth (nvme1n1) 00:24:13.198 Could not set queue depth (nvme2n1) 00:24:13.198 Could not set queue depth (nvme3n1) 00:24:13.198 Could not set queue depth (nvme4n1) 00:24:13.198 Could not set queue depth (nvme5n1) 00:24:13.198 Could not set queue depth (nvme6n1) 00:24:13.198 Could not set queue depth (nvme7n1) 00:24:13.198 Could not set queue depth (nvme8n1) 00:24:13.198 Could not set queue depth (nvme9n1) 00:24:13.198 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:13.198 fio-3.35 00:24:13.198 Starting 11 threads 00:24:25.401 00:24:25.401 job0: (groupid=0, jobs=1): err= 0: pid=1523067: Fri Jul 26 18:24:49 2024 00:24:25.401 read: IOPS=583, BW=146MiB/s (153MB/s)(1472MiB/10096msec) 00:24:25.401 slat (usec): min=12, max=96691, avg=1517.73, stdev=5155.76 00:24:25.401 clat (msec): min=4, max=283, avg=108.14, stdev=47.90 00:24:25.401 lat (msec): min=4, max=283, avg=109.66, stdev=48.75 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 10], 5.00th=[ 30], 10.00th=[ 46], 20.00th=[ 67], 00:24:25.401 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 106], 60.00th=[ 120], 00:24:25.401 | 70.00th=[ 
133], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 188], 00:24:25.401 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 239], 99.95th=[ 243], 00:24:25.401 | 99.99th=[ 284] 00:24:25.401 bw ( KiB/s): min=85844, max=249856, per=8.33%, avg=149085.80, stdev=52928.09, samples=20 00:24:25.401 iops : min= 335, max= 976, avg=582.35, stdev=206.77, samples=20 00:24:25.401 lat (msec) : 10=1.12%, 20=2.24%, 50=7.71%, 100=34.78%, 250=54.13% 00:24:25.401 lat (msec) : 500=0.02% 00:24:25.401 cpu : usr=0.33%, sys=2.02%, ctx=1349, majf=0, minf=3721 00:24:25.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:25.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.401 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.401 job1: (groupid=0, jobs=1): err= 0: pid=1523068: Fri Jul 26 18:24:49 2024 00:24:25.401 read: IOPS=808, BW=202MiB/s (212MB/s)(2025MiB/10015msec) 00:24:25.401 slat (usec): min=9, max=109777, avg=791.64, stdev=3458.97 00:24:25.401 clat (usec): min=1019, max=345109, avg=78299.04, stdev=56034.33 00:24:25.401 lat (usec): min=1055, max=345127, avg=79090.69, stdev=56465.76 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 30], 20.00th=[ 33], 00:24:25.401 | 30.00th=[ 36], 40.00th=[ 45], 50.00th=[ 63], 60.00th=[ 78], 00:24:25.401 | 70.00th=[ 95], 80.00th=[ 134], 90.00th=[ 159], 95.00th=[ 182], 00:24:25.401 | 99.00th=[ 245], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:24:25.401 | 99.99th=[ 347] 00:24:25.401 bw ( KiB/s): min=56320, max=464896, per=11.49%, avg=205685.25, stdev=107718.48, samples=20 00:24:25.401 iops : min= 220, max= 1816, avg=803.40, stdev=420.74, samples=20 00:24:25.401 lat (msec) : 2=0.04%, 4=0.19%, 10=3.40%, 20=2.53%, 50=37.71% 00:24:25.401 lat (msec) : 100=28.08%, 250=27.20%, 500=0.85% 00:24:25.401 cpu : usr=0.40%, sys=2.22%, ctx=1952, majf=0, minf=4097 00:24:25.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:25.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.401 issued rwts: total=8098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.401 job2: (groupid=0, jobs=1): err= 0: pid=1523069: Fri Jul 26 18:24:49 2024 00:24:25.401 read: IOPS=566, BW=142MiB/s (149MB/s)(1426MiB/10059msec) 00:24:25.401 slat (usec): min=9, max=104166, avg=971.68, stdev=4603.30 00:24:25.401 clat (usec): min=1989, max=281101, avg=111856.74, stdev=51343.22 00:24:25.401 lat (msec): min=2, max=281, avg=112.83, stdev=51.84 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 64], 00:24:25.401 | 30.00th=[ 83], 40.00th=[ 102], 50.00th=[ 112], 60.00th=[ 125], 00:24:25.401 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 182], 95.00th=[ 203], 00:24:25.401 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 266], 99.95th=[ 271], 00:24:25.401 | 99.99th=[ 284] 00:24:25.401 bw ( KiB/s): min=84992, max=263168, per=8.06%, avg=144320.85, stdev=41072.63, samples=20 00:24:25.401 iops : min= 332, max= 1028, avg=563.75, stdev=160.44, samples=20 00:24:25.401 lat (msec) : 2=0.02%, 4=0.16%, 10=1.21%, 20=1.47%, 50=9.82% 00:24:25.401 lat (msec) : 100=26.71%, 250=60.49%, 500=0.12% 00:24:25.401 cpu : 
usr=0.36%, sys=1.56%, ctx=1632, majf=0, minf=4097 00:24:25.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:25.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.401 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.401 job3: (groupid=0, jobs=1): err= 0: pid=1523072: Fri Jul 26 18:24:49 2024 00:24:25.401 read: IOPS=735, BW=184MiB/s (193MB/s)(1857MiB/10098msec) 00:24:25.401 slat (usec): min=10, max=57301, avg=1024.53, stdev=3451.02 00:24:25.401 clat (msec): min=6, max=239, avg=85.91, stdev=41.62 00:24:25.401 lat (msec): min=6, max=239, avg=86.93, stdev=42.06 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 41], 00:24:25.401 | 30.00th=[ 57], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 96], 00:24:25.401 | 70.00th=[ 109], 80.00th=[ 123], 90.00th=[ 142], 95.00th=[ 157], 00:24:25.401 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 234], 99.95th=[ 241], 00:24:25.401 | 99.99th=[ 241] 00:24:25.401 bw ( KiB/s): min=108327, max=423424, per=10.53%, avg=188558.75, stdev=85062.82, samples=20 00:24:25.401 iops : min= 423, max= 1654, avg=736.55, stdev=332.28, samples=20 00:24:25.401 lat (msec) : 10=0.31%, 20=0.89%, 50=24.74%, 100=37.03%, 250=37.03% 00:24:25.401 cpu : usr=0.41%, sys=2.21%, ctx=1674, majf=0, minf=4097 00:24:25.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:25.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.401 issued rwts: total=7429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.401 job4: (groupid=0, jobs=1): err= 0: pid=1523073: Fri Jul 26 18:24:49 2024 00:24:25.401 read: IOPS=522, BW=131MiB/s (137MB/s)(1319MiB/10094msec) 00:24:25.401 slat (usec): min=10, max=159550, avg=1521.94, stdev=5486.53 00:24:25.401 clat (usec): min=1844, max=279079, avg=120850.41, stdev=45638.58 00:24:25.401 lat (usec): min=1917, max=289212, avg=122372.35, stdev=46400.29 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 14], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 91], 00:24:25.401 | 30.00th=[ 102], 40.00th=[ 111], 50.00th=[ 120], 60.00th=[ 128], 00:24:25.401 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 207], 00:24:25.401 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 255], 99.95th=[ 259], 00:24:25.401 | 99.99th=[ 279] 00:24:25.401 bw ( KiB/s): min=74240, max=226816, per=7.45%, avg=133413.45, stdev=39102.30, samples=20 00:24:25.401 iops : min= 290, max= 886, avg=521.10, stdev=152.74, samples=20 00:24:25.401 lat (msec) : 2=0.02%, 4=0.42%, 10=0.23%, 20=0.49%, 50=5.99% 00:24:25.401 lat (msec) : 100=21.08%, 250=71.39%, 500=0.38% 00:24:25.401 cpu : usr=0.48%, sys=1.75%, ctx=1349, majf=0, minf=4097 00:24:25.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:25.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.401 issued rwts: total=5275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.401 job5: (groupid=0, jobs=1): err= 0: pid=1523074: Fri Jul 26 18:24:49 2024 
00:24:25.401 read: IOPS=639, BW=160MiB/s (168MB/s)(1615MiB/10093msec) 00:24:25.401 slat (usec): min=9, max=90229, avg=1098.93, stdev=4659.35 00:24:25.401 clat (usec): min=1072, max=319488, avg=98854.50, stdev=58207.83 00:24:25.401 lat (usec): min=1103, max=342692, avg=99953.43, stdev=58825.09 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 45], 00:24:25.401 | 30.00th=[ 63], 40.00th=[ 81], 50.00th=[ 99], 60.00th=[ 113], 00:24:25.401 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 205], 00:24:25.401 | 99.00th=[ 247], 99.50th=[ 268], 99.90th=[ 296], 99.95th=[ 321], 00:24:25.401 | 99.99th=[ 321] 00:24:25.401 bw ( KiB/s): min=74752, max=327680, per=9.15%, avg=163712.50, stdev=57996.38, samples=20 00:24:25.401 iops : min= 292, max= 1280, avg=639.50, stdev=226.55, samples=20 00:24:25.401 lat (msec) : 2=0.09%, 4=0.67%, 10=3.78%, 20=5.42%, 50=12.93% 00:24:25.401 lat (msec) : 100=27.70%, 250=48.45%, 500=0.96% 00:24:25.401 cpu : usr=0.33%, sys=1.87%, ctx=1594, majf=0, minf=4097 00:24:25.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:25.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.401 issued rwts: total=6458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.401 job6: (groupid=0, jobs=1): err= 0: pid=1523081: Fri Jul 26 18:24:49 2024 00:24:25.401 read: IOPS=675, BW=169MiB/s (177MB/s)(1706MiB/10096msec) 00:24:25.401 slat (usec): min=9, max=82240, avg=1127.38, stdev=3983.31 00:24:25.401 clat (msec): min=5, max=326, avg=93.51, stdev=45.65 00:24:25.401 lat (msec): min=5, max=326, avg=94.64, stdev=46.15 00:24:25.401 clat percentiles (msec): 00:24:25.401 | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 56], 00:24:25.401 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 96], 00:24:25.401 | 70.00th=[ 114], 80.00th=[ 136], 90.00th=[ 157], 95.00th=[ 174], 00:24:25.401 | 99.00th=[ 222], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:24:25.401 | 99.99th=[ 326] 00:24:25.401 bw ( KiB/s): min=98816, max=285184, per=9.67%, avg=173066.20, stdev=59505.03, samples=20 00:24:25.401 iops : min= 386, max= 1114, avg=676.00, stdev=232.43, samples=20 00:24:25.401 lat (msec) : 10=0.31%, 20=1.01%, 50=10.23%, 100=51.27%, 250=36.49% 00:24:25.401 lat (msec) : 500=0.69% 00:24:25.402 cpu : usr=0.48%, sys=2.02%, ctx=1562, majf=0, minf=4097 00:24:25.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.402 issued rwts: total=6823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.402 job7: (groupid=0, jobs=1): err= 0: pid=1523082: Fri Jul 26 18:24:49 2024 00:24:25.402 read: IOPS=604, BW=151MiB/s (159MB/s)(1520MiB/10057msec) 00:24:25.402 slat (usec): min=9, max=113835, avg=949.67, stdev=4400.25 00:24:25.402 clat (usec): min=1528, max=263501, avg=104826.08, stdev=50728.70 00:24:25.402 lat (usec): min=1548, max=281548, avg=105775.75, stdev=51245.14 00:24:25.402 clat percentiles (msec): 00:24:25.402 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 24], 20.00th=[ 66], 00:24:25.402 | 30.00th=[ 87], 40.00th=[ 99], 50.00th=[ 109], 60.00th=[ 120], 00:24:25.402 | 70.00th=[ 130], 
80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 190], 00:24:25.402 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 251], 99.95th=[ 264], 00:24:25.402 | 99.99th=[ 264] 00:24:25.402 bw ( KiB/s): min=95744, max=271360, per=8.60%, avg=154021.55, stdev=41173.49, samples=20 00:24:25.402 iops : min= 374, max= 1060, avg=601.60, stdev=160.86, samples=20 00:24:25.402 lat (msec) : 2=0.03%, 4=0.08%, 10=4.56%, 20=3.67%, 50=7.56% 00:24:25.402 lat (msec) : 100=25.88%, 250=58.07%, 500=0.15% 00:24:25.402 cpu : usr=0.45%, sys=1.69%, ctx=1664, majf=0, minf=4097 00:24:25.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.402 issued rwts: total=6081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.402 job8: (groupid=0, jobs=1): err= 0: pid=1523084: Fri Jul 26 18:24:49 2024 00:24:25.402 read: IOPS=618, BW=155MiB/s (162MB/s)(1562MiB/10096msec) 00:24:25.402 slat (usec): min=9, max=115299, avg=1491.94, stdev=4406.26 00:24:25.402 clat (msec): min=6, max=250, avg=101.84, stdev=40.75 00:24:25.402 lat (msec): min=6, max=268, avg=103.33, stdev=41.29 00:24:25.402 clat percentiles (msec): 00:24:25.402 | 1.00th=[ 35], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 64], 00:24:25.402 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 110], 00:24:25.402 | 70.00th=[ 129], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 169], 00:24:25.402 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 245], 00:24:25.402 | 99.99th=[ 251] 00:24:25.402 bw ( KiB/s): min=99640, max=257024, per=8.85%, avg=158326.00, stdev=55831.25, samples=20 00:24:25.402 iops : min= 389, max= 1004, avg=618.45, stdev=218.10, samples=20 00:24:25.402 lat (msec) : 10=0.02%, 20=0.18%, 50=4.75%, 100=49.99%, 250=45.05% 00:24:25.402 lat (msec) : 500=0.02% 00:24:25.402 cpu : usr=0.42%, sys=1.96%, ctx=1346, majf=0, minf=4097 00:24:25.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.402 issued rwts: total=6249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.402 job9: (groupid=0, jobs=1): err= 0: pid=1523091: Fri Jul 26 18:24:49 2024 00:24:25.402 read: IOPS=602, BW=151MiB/s (158MB/s)(1521MiB/10097msec) 00:24:25.402 slat (usec): min=9, max=198681, avg=884.83, stdev=4590.08 00:24:25.402 clat (usec): min=995, max=326647, avg=105273.49, stdev=54446.27 00:24:25.402 lat (usec): min=1061, max=326859, avg=106158.31, stdev=54830.36 00:24:25.402 clat percentiles (msec): 00:24:25.402 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 54], 00:24:25.402 | 30.00th=[ 77], 40.00th=[ 92], 50.00th=[ 104], 60.00th=[ 120], 00:24:25.402 | 70.00th=[ 134], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 194], 00:24:25.402 | 99.00th=[ 226], 99.50th=[ 321], 99.90th=[ 326], 99.95th=[ 326], 00:24:25.402 | 99.99th=[ 326] 00:24:25.402 bw ( KiB/s): min=94019, max=269824, per=8.61%, avg=154076.95, stdev=50023.08, samples=20 00:24:25.402 iops : min= 367, max= 1054, avg=601.85, stdev=195.42, samples=20 00:24:25.402 lat (usec) : 1000=0.02% 00:24:25.402 lat (msec) : 2=0.38%, 4=0.89%, 10=1.35%, 20=2.84%, 50=12.76% 00:24:25.402 lat (msec) : 100=28.87%, 250=52.17%, 500=0.72% 
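Each [jobN] stanza in the job file echoed before this run pins one fio job to one connected namespace, and the per-job blocks report the resulting read bandwidth, IOPS, and completion-latency percentiles. A sketch of a standalone equivalent, assuming scripts/fio-wrapper simply maps its flags onto the bs/iodepth/rw/runtime values shown in the [global] section (an assumption; the wrapper script itself may do more):

# Hypothetical standalone equivalent of:
#   scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
# Assumed flag mapping: -i -> bs, -d -> iodepth, -t -> rw, -r -> runtime.
cat > /tmp/multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/multiconnection.fio   # the real run adds one [jobN] stanza per namespace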
00:24:25.402 cpu : usr=0.31%, sys=1.76%, ctx=1821, majf=0, minf=4097 00:24:25.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.402 issued rwts: total=6082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.402 job10: (groupid=0, jobs=1): err= 0: pid=1523092: Fri Jul 26 18:24:49 2024 00:24:25.402 read: IOPS=645, BW=161MiB/s (169MB/s)(1630MiB/10096msec) 00:24:25.402 slat (usec): min=9, max=82964, avg=1067.36, stdev=3850.24 00:24:25.402 clat (usec): min=1020, max=217251, avg=97960.68, stdev=45341.32 00:24:25.402 lat (usec): min=1048, max=229123, avg=99028.03, stdev=45808.54 00:24:25.402 clat percentiles (msec): 00:24:25.402 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 64], 00:24:25.402 | 30.00th=[ 77], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 108], 00:24:25.402 | 70.00th=[ 121], 80.00th=[ 138], 90.00th=[ 161], 95.00th=[ 176], 00:24:25.402 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 213], 99.95th=[ 215], 00:24:25.402 | 99.99th=[ 218] 00:24:25.402 bw ( KiB/s): min=100864, max=260096, per=9.23%, avg=165273.95, stdev=39664.39, samples=20 00:24:25.402 iops : min= 394, max= 1016, avg=645.55, stdev=154.92, samples=20 00:24:25.402 lat (msec) : 2=0.31%, 4=0.20%, 10=1.33%, 20=5.38%, 50=6.86% 00:24:25.402 lat (msec) : 100=38.60%, 250=47.32% 00:24:25.402 cpu : usr=0.35%, sys=1.81%, ctx=1666, majf=0, minf=4097 00:24:25.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.402 issued rwts: total=6520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.402 00:24:25.402 Run status group 0 (all jobs): 00:24:25.402 READ: bw=1748MiB/s (1833MB/s), 131MiB/s-202MiB/s (137MB/s-212MB/s), io=17.2GiB (18.5GB), run=10015-10098msec 00:24:25.402 00:24:25.402 Disk stats (read/write): 00:24:25.402 nvme0n1: ios=11543/0, merge=0/0, ticks=1230301/0, in_queue=1230301, util=96.92% 00:24:25.402 nvme10n1: ios=15731/0, merge=0/0, ticks=1241329/0, in_queue=1241329, util=97.17% 00:24:25.402 nvme1n1: ios=11217/0, merge=0/0, ticks=1232649/0, in_queue=1232649, util=97.50% 00:24:25.402 nvme2n1: ios=14629/0, merge=0/0, ticks=1232701/0, in_queue=1232701, util=97.68% 00:24:25.402 nvme3n1: ios=10329/0, merge=0/0, ticks=1228989/0, in_queue=1228989, util=97.78% 00:24:25.402 nvme4n1: ios=12702/0, merge=0/0, ticks=1233671/0, in_queue=1233671, util=98.11% 00:24:25.402 nvme5n1: ios=13404/0, merge=0/0, ticks=1232818/0, in_queue=1232818, util=98.27% 00:24:25.402 nvme6n1: ios=11958/0, merge=0/0, ticks=1227684/0, in_queue=1227684, util=98.42% 00:24:25.402 nvme7n1: ios=12271/0, merge=0/0, ticks=1227941/0, in_queue=1227941, util=98.85% 00:24:25.402 nvme8n1: ios=11931/0, merge=0/0, ticks=1237747/0, in_queue=1237747, util=99.05% 00:24:25.402 nvme9n1: ios=12788/0, merge=0/0, ticks=1232006/0, in_queue=1232006, util=99.18% 00:24:25.402 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:25.402 [global] 00:24:25.402 thread=1 00:24:25.402 invalidate=1 
00:24:25.402 rw=randwrite 00:24:25.402 time_based=1 00:24:25.402 runtime=10 00:24:25.402 ioengine=libaio 00:24:25.402 direct=1 00:24:25.402 bs=262144 00:24:25.402 iodepth=64 00:24:25.402 norandommap=1 00:24:25.402 numjobs=1 00:24:25.402 00:24:25.402 [job0] 00:24:25.402 filename=/dev/nvme0n1 00:24:25.402 [job1] 00:24:25.402 filename=/dev/nvme10n1 00:24:25.402 [job2] 00:24:25.402 filename=/dev/nvme1n1 00:24:25.402 [job3] 00:24:25.402 filename=/dev/nvme2n1 00:24:25.402 [job4] 00:24:25.402 filename=/dev/nvme3n1 00:24:25.402 [job5] 00:24:25.402 filename=/dev/nvme4n1 00:24:25.402 [job6] 00:24:25.402 filename=/dev/nvme5n1 00:24:25.402 [job7] 00:24:25.402 filename=/dev/nvme6n1 00:24:25.402 [job8] 00:24:25.402 filename=/dev/nvme7n1 00:24:25.402 [job9] 00:24:25.402 filename=/dev/nvme8n1 00:24:25.402 [job10] 00:24:25.402 filename=/dev/nvme9n1 00:24:25.402 Could not set queue depth (nvme0n1) 00:24:25.402 Could not set queue depth (nvme10n1) 00:24:25.402 Could not set queue depth (nvme1n1) 00:24:25.402 Could not set queue depth (nvme2n1) 00:24:25.402 Could not set queue depth (nvme3n1) 00:24:25.402 Could not set queue depth (nvme4n1) 00:24:25.402 Could not set queue depth (nvme5n1) 00:24:25.402 Could not set queue depth (nvme6n1) 00:24:25.402 Could not set queue depth (nvme7n1) 00:24:25.402 Could not set queue depth (nvme8n1) 00:24:25.402 Could not set queue depth (nvme9n1) 00:24:25.402 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.402 fio-3.35 00:24:25.402 Starting 11 threads 00:24:35.381 00:24:35.381 job0: (groupid=0, jobs=1): err= 0: pid=1524107: Fri Jul 26 18:25:00 2024 00:24:35.381 write: IOPS=471, BW=118MiB/s (123MB/s)(1188MiB/10083msec); 0 zone resets 00:24:35.381 slat (usec): min=22, max=88111, avg=1531.19, stdev=4303.66 00:24:35.381 clat (usec): min=1599, max=360896, avg=134277.19, stdev=76308.53 00:24:35.381 lat (usec): min=1637, max=360936, avg=135808.37, stdev=77297.71 00:24:35.381 clat percentiles (msec): 00:24:35.381 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 57], 00:24:35.381 | 30.00th=[ 91], 40.00th=[ 108], 50.00th=[ 140], 60.00th=[ 155], 00:24:35.381 | 70.00th=[ 171], 80.00th=[ 190], 90.00th=[ 243], 95.00th=[ 279], 00:24:35.381 | 99.00th=[ 
321], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 342], 00:24:35.381 | 99.99th=[ 363] 00:24:35.381 bw ( KiB/s): min=59392, max=256000, per=9.58%, avg=119987.20, stdev=48436.70, samples=20 00:24:35.381 iops : min= 232, max= 1000, avg=468.70, stdev=189.21, samples=20 00:24:35.381 lat (msec) : 2=0.13%, 4=0.25%, 10=1.39%, 20=3.24%, 50=10.95% 00:24:35.381 lat (msec) : 100=18.78%, 250=56.53%, 500=8.74% 00:24:35.381 cpu : usr=1.47%, sys=1.64%, ctx=2597, majf=0, minf=1 00:24:35.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:35.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.381 issued rwts: total=0,4750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.381 job1: (groupid=0, jobs=1): err= 0: pid=1524119: Fri Jul 26 18:25:00 2024 00:24:35.381 write: IOPS=364, BW=91.1MiB/s (95.5MB/s)(934MiB/10251msec); 0 zone resets 00:24:35.381 slat (usec): min=21, max=1177.3k, avg=1988.66, stdev=22365.33 00:24:35.381 clat (msec): min=3, max=2893, avg=173.44, stdev=327.26 00:24:35.381 lat (msec): min=3, max=2893, avg=175.43, stdev=330.13 00:24:35.381 clat percentiles (msec): 00:24:35.381 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 45], 00:24:35.381 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 138], 00:24:35.381 | 70.00th=[ 169], 80.00th=[ 201], 90.00th=[ 234], 95.00th=[ 330], 00:24:35.381 | 99.00th=[ 2140], 99.50th=[ 2232], 99.90th=[ 2299], 99.95th=[ 2903], 00:24:35.381 | 99.99th=[ 2903] 00:24:35.381 bw ( KiB/s): min= 2048, max=214016, per=7.90%, avg=98950.74, stdev=63559.49, samples=19 00:24:35.381 iops : min= 8, max= 836, avg=386.53, stdev=248.28, samples=19 00:24:35.381 lat (msec) : 4=0.11%, 10=6.93%, 20=7.10%, 50=6.75%, 100=32.77% 00:24:35.381 lat (msec) : 250=38.05%, 500=4.66%, 750=0.29%, 1000=0.16%, 2000=1.74% 00:24:35.381 lat (msec) : >=2000=1.45% 00:24:35.381 cpu : usr=1.06%, sys=1.32%, ctx=2086, majf=0, minf=1 00:24:35.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:24:35.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.381 issued rwts: total=0,3735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.381 job2: (groupid=0, jobs=1): err= 0: pid=1524121: Fri Jul 26 18:25:00 2024 00:24:35.381 write: IOPS=519, BW=130MiB/s (136MB/s)(1305MiB/10052msec); 0 zone resets 00:24:35.381 slat (usec): min=17, max=70849, avg=1349.55, stdev=4187.01 00:24:35.381 clat (usec): min=1728, max=580422, avg=121847.91, stdev=73561.03 00:24:35.381 lat (usec): min=1770, max=580462, avg=123197.47, stdev=74570.11 00:24:35.381 clat percentiles (msec): 00:24:35.381 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 34], 20.00th=[ 62], 00:24:35.381 | 30.00th=[ 77], 40.00th=[ 93], 50.00th=[ 120], 60.00th=[ 144], 00:24:35.381 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 197], 95.00th=[ 218], 00:24:35.381 | 99.00th=[ 414], 99.50th=[ 514], 99.90th=[ 575], 99.95th=[ 584], 00:24:35.381 | 99.99th=[ 584] 00:24:35.381 bw ( KiB/s): min=26624, max=197632, per=10.54%, avg=132028.35, stdev=46165.30, samples=20 00:24:35.381 iops : min= 104, max= 772, avg=515.70, stdev=180.37, samples=20 00:24:35.381 lat (msec) : 2=0.02%, 4=0.25%, 10=1.88%, 20=3.26%, 50=9.96% 00:24:35.381 lat (msec) : 100=27.18%, 250=55.48%, 
500=1.38%, 750=0.59% 00:24:35.381 cpu : usr=1.67%, sys=1.70%, ctx=3025, majf=0, minf=1 00:24:35.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:35.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.381 issued rwts: total=0,5220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.381 job3: (groupid=0, jobs=1): err= 0: pid=1524122: Fri Jul 26 18:25:00 2024 00:24:35.381 write: IOPS=775, BW=194MiB/s (203MB/s)(1957MiB/10087msec); 0 zone resets 00:24:35.381 slat (usec): min=18, max=61254, avg=1271.44, stdev=2648.76 00:24:35.381 clat (msec): min=19, max=197, avg=81.17, stdev=43.19 00:24:35.381 lat (msec): min=19, max=197, avg=82.44, stdev=43.77 00:24:35.381 clat percentiles (msec): 00:24:35.381 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:24:35.381 | 30.00th=[ 45], 40.00th=[ 51], 50.00th=[ 66], 60.00th=[ 75], 00:24:35.381 | 70.00th=[ 112], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 163], 00:24:35.381 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 199], 00:24:35.381 | 99.99th=[ 199] 00:24:35.381 bw ( KiB/s): min=94208, max=388608, per=15.87%, avg=198758.40, stdev=97320.64, samples=20 00:24:35.381 iops : min= 368, max= 1518, avg=776.40, stdev=380.16, samples=20 00:24:35.381 lat (msec) : 20=0.05%, 50=39.81%, 100=26.61%, 250=33.52% 00:24:35.381 cpu : usr=2.39%, sys=2.12%, ctx=1972, majf=0, minf=1 00:24:35.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:35.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.381 issued rwts: total=0,7827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.381 job4: (groupid=0, jobs=1): err= 0: pid=1524123: Fri Jul 26 18:25:00 2024 00:24:35.381 write: IOPS=546, BW=137MiB/s (143MB/s)(1402MiB/10258msec); 0 zone resets 00:24:35.381 slat (usec): min=20, max=90600, avg=1090.59, stdev=3399.30 00:24:35.381 clat (usec): min=1554, max=1606.6k, avg=115899.93, stdev=129373.49 00:24:35.381 lat (usec): min=1598, max=1614.0k, avg=116990.52, stdev=129970.24 00:24:35.381 clat percentiles (msec): 00:24:35.381 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 31], 20.00th=[ 59], 00:24:35.381 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 96], 60.00th=[ 120], 00:24:35.381 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 201], 00:24:35.381 | 99.00th=[ 592], 99.50th=[ 1385], 99.90th=[ 1603], 99.95th=[ 1603], 00:24:35.381 | 99.99th=[ 1603] 00:24:35.381 bw ( KiB/s): min=61952, max=262144, per=11.33%, avg=141865.55, stdev=53479.28, samples=20 00:24:35.381 iops : min= 242, max= 1024, avg=554.15, stdev=208.91, samples=20 00:24:35.381 lat (msec) : 2=0.09%, 4=0.39%, 10=1.87%, 20=3.59%, 50=10.77% 00:24:35.381 lat (msec) : 100=34.89%, 250=45.77%, 500=1.27%, 750=0.78%, 1000=0.02% 00:24:35.381 lat (msec) : 2000=0.55% 00:24:35.381 cpu : usr=1.59%, sys=1.82%, ctx=3404, majf=0, minf=1 00:24:35.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:35.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.381 issued rwts: total=0,5606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.381 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:24:35.381 job5: (groupid=0, jobs=1): err= 0: pid=1524124: Fri Jul 26 18:25:00 2024 00:24:35.381 write: IOPS=316, BW=79.0MiB/s (82.9MB/s)(812MiB/10266msec); 0 zone resets 00:24:35.381 slat (usec): min=17, max=1409.1k, avg=2395.55, stdev=27870.26 00:24:35.381 clat (usec): min=1877, max=2155.8k, avg=199919.56, stdev=323712.23 00:24:35.381 lat (usec): min=1924, max=2368.4k, avg=202315.11, stdev=327321.00 00:24:35.381 clat percentiles (msec): 00:24:35.381 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 50], 20.00th=[ 96], 00:24:35.381 | 30.00th=[ 116], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 155], 00:24:35.381 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 215], 95.00th=[ 251], 00:24:35.381 | 99.00th=[ 1972], 99.50th=[ 2089], 99.90th=[ 2140], 99.95th=[ 2165], 00:24:35.381 | 99.99th=[ 2165] 00:24:35.381 bw ( KiB/s): min= 2048, max=145920, per=7.23%, avg=90510.22, stdev=49576.69, samples=18 00:24:35.381 iops : min= 8, max= 570, avg=353.56, stdev=193.66, samples=18 00:24:35.381 lat (msec) : 2=0.06%, 4=0.18%, 10=1.69%, 20=3.05%, 50=5.11% 00:24:35.381 lat (msec) : 100=10.69%, 250=74.21%, 500=0.74%, 750=0.12%, 1000=0.25% 00:24:35.381 lat (msec) : 2000=2.93%, >=2000=0.96% 00:24:35.381 cpu : usr=1.10%, sys=0.94%, ctx=1819, majf=0, minf=1 00:24:35.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:24:35.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.382 issued rwts: total=0,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.382 job6: (groupid=0, jobs=1): err= 0: pid=1524125: Fri Jul 26 18:25:00 2024 00:24:35.382 write: IOPS=288, BW=72.1MiB/s (75.6MB/s)(739MiB/10250msec); 0 zone resets 00:24:35.382 slat (usec): min=23, max=1033.7k, avg=2808.60, stdev=22467.16 00:24:35.382 clat (msec): min=4, max=2129, avg=218.89, stdev=311.51 00:24:35.382 lat (msec): min=6, max=2129, avg=221.70, stdev=315.30 00:24:35.382 clat percentiles (msec): 00:24:35.382 | 1.00th=[ 16], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 85], 00:24:35.382 | 30.00th=[ 125], 40.00th=[ 138], 50.00th=[ 163], 60.00th=[ 176], 00:24:35.382 | 70.00th=[ 186], 80.00th=[ 207], 90.00th=[ 253], 95.00th=[ 877], 00:24:35.382 | 99.00th=[ 1620], 99.50th=[ 1888], 99.90th=[ 2123], 99.95th=[ 2123], 00:24:35.382 | 99.99th=[ 2123] 00:24:35.382 bw ( KiB/s): min= 4608, max=173568, per=6.23%, avg=77985.68, stdev=54984.35, samples=19 00:24:35.382 iops : min= 18, max= 678, avg=304.63, stdev=214.78, samples=19 00:24:35.382 lat (msec) : 10=0.24%, 20=2.54%, 50=8.49%, 100=12.78%, 250=65.44% 00:24:35.382 lat (msec) : 500=4.73%, 750=0.37%, 1000=0.88%, 2000=4.06%, >=2000=0.47% 00:24:35.382 cpu : usr=0.97%, sys=0.87%, ctx=1632, majf=0, minf=1 00:24:35.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:24:35.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.382 issued rwts: total=0,2957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.382 job7: (groupid=0, jobs=1): err= 0: pid=1524126: Fri Jul 26 18:25:00 2024 00:24:35.382 write: IOPS=309, BW=77.3MiB/s (81.1MB/s)(793MiB/10262msec); 0 zone resets 00:24:35.382 slat (usec): min=16, max=706556, avg=2248.09, stdev=16184.98 00:24:35.382 clat (usec): min=1581, max=2494.6k, 
avg=204607.86, stdev=336987.84 00:24:35.382 lat (usec): min=1623, max=2494.6k, avg=206855.95, stdev=339547.26 00:24:35.382 clat percentiles (msec): 00:24:35.382 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 69], 00:24:35.382 | 30.00th=[ 106], 40.00th=[ 123], 50.00th=[ 140], 60.00th=[ 161], 00:24:35.382 | 70.00th=[ 188], 80.00th=[ 226], 90.00th=[ 262], 95.00th=[ 296], 00:24:35.382 | 99.00th=[ 2039], 99.50th=[ 2123], 99.90th=[ 2400], 99.95th=[ 2500], 00:24:35.382 | 99.99th=[ 2500] 00:24:35.382 bw ( KiB/s): min= 8192, max=159744, per=6.69%, avg=83779.37, stdev=50949.99, samples=19 00:24:35.382 iops : min= 32, max= 624, avg=327.26, stdev=199.02, samples=19 00:24:35.382 lat (msec) : 2=0.22%, 4=1.07%, 10=2.30%, 20=6.56%, 50=6.40% 00:24:35.382 lat (msec) : 100=12.01%, 250=59.25%, 500=7.97%, 750=0.06%, 1000=0.16% 00:24:35.382 lat (msec) : 2000=2.87%, >=2000=1.13% 00:24:35.382 cpu : usr=1.03%, sys=1.01%, ctx=1869, majf=0, minf=1 00:24:35.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:24:35.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.382 issued rwts: total=0,3173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.382 job8: (groupid=0, jobs=1): err= 0: pid=1524127: Fri Jul 26 18:25:00 2024 00:24:35.382 write: IOPS=317, BW=79.4MiB/s (83.3MB/s)(815MiB/10262msec); 0 zone resets 00:24:35.382 slat (usec): min=26, max=1033.5k, avg=2808.05, stdev=21595.07 00:24:35.382 clat (msec): min=2, max=2095, avg=198.46, stdev=303.32 00:24:35.382 lat (msec): min=2, max=2096, avg=201.27, stdev=307.03 00:24:35.382 clat percentiles (msec): 00:24:35.382 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 69], 20.00th=[ 100], 00:24:35.382 | 30.00th=[ 115], 40.00th=[ 129], 50.00th=[ 140], 60.00th=[ 146], 00:24:35.382 | 70.00th=[ 159], 80.00th=[ 182], 90.00th=[ 205], 95.00th=[ 355], 00:24:35.382 | 99.00th=[ 1770], 99.50th=[ 1905], 99.90th=[ 2039], 99.95th=[ 2089], 00:24:35.382 | 99.99th=[ 2089] 00:24:35.382 bw ( KiB/s): min= 4096, max=198144, per=6.88%, avg=86160.63, stdev=58841.32, samples=19 00:24:35.382 iops : min= 16, max= 774, avg=336.53, stdev=229.84, samples=19 00:24:35.382 lat (msec) : 4=0.09%, 10=0.98%, 20=1.66%, 50=3.65%, 100=14.08% 00:24:35.382 lat (msec) : 250=73.44%, 500=1.10%, 750=0.25%, 1000=0.52%, 2000=4.05% 00:24:35.382 lat (msec) : >=2000=0.18% 00:24:35.382 cpu : usr=1.03%, sys=1.10%, ctx=1346, majf=0, minf=1 00:24:35.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:24:35.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.382 issued rwts: total=0,3261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.382 job9: (groupid=0, jobs=1): err= 0: pid=1524131: Fri Jul 26 18:25:00 2024 00:24:35.382 write: IOPS=383, BW=95.9MiB/s (101MB/s)(984MiB/10252msec); 0 zone resets 00:24:35.382 slat (usec): min=15, max=377994, avg=1535.04, stdev=9531.23 00:24:35.382 clat (msec): min=2, max=2197, avg=165.15, stdev=290.88 00:24:35.382 lat (msec): min=2, max=2198, avg=166.69, stdev=293.00 00:24:35.382 clat percentiles (msec): 00:24:35.382 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 34], 00:24:35.382 | 30.00th=[ 63], 40.00th=[ 102], 50.00th=[ 123], 60.00th=[ 134], 00:24:35.382 | 70.00th=[ 146], 
80.00th=[ 178], 90.00th=[ 236], 95.00th=[ 275],
00:24:35.382 | 99.00th=[ 1653], 99.50th=[ 2123], 99.90th=[ 2198], 99.95th=[ 2198],
00:24:35.382 | 99.99th=[ 2198]
00:24:35.382 bw ( KiB/s): min= 4096, max=209408, per=7.91%, avg=99097.60, stdev=55595.72, samples=20
00:24:35.382 iops : min= 16, max= 818, avg=387.10, stdev=217.17, samples=20
00:24:35.382 lat (msec) : 4=0.43%, 10=2.49%, 20=6.38%, 50=17.51%, 100=12.76%
00:24:35.382 lat (msec) : 250=53.15%, 500=3.66%, 750=0.05%, 1000=0.15%, 2000=2.72%
00:24:35.382 lat (msec) : >=2000=0.69%
00:24:35.382 cpu : usr=1.23%, sys=1.34%, ctx=2586, majf=0, minf=1
00:24:35.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:24:35.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:35.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:35.382 issued rwts: total=0,3934,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:35.382 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:35.382 job10: (groupid=0, jobs=1): err= 0: pid=1524141: Fri Jul 26 18:25:00 2024
00:24:35.382 write: IOPS=647, BW=162MiB/s (170MB/s)(1629MiB/10067msec); 0 zone resets
00:24:35.382 slat (usec): min=19, max=131476, avg=1245.76, stdev=3687.90
00:24:35.382 clat (msec): min=2, max=468, avg=97.59, stdev=77.69
00:24:35.382 lat (msec): min=2, max=468, avg=98.84, stdev=78.72
00:24:35.382 clat percentiles (msec):
00:24:35.382 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 38], 20.00th=[ 43],
00:24:35.382 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 90],
00:24:35.382 | 70.00th=[ 118], 80.00th=[ 167], 90.00th=[ 222], 95.00th=[ 247],
00:24:35.382 | 99.00th=[ 317], 99.50th=[ 418], 99.90th=[ 460], 99.95th=[ 460],
00:24:35.382 | 99.99th=[ 468]
00:24:35.382 bw ( KiB/s): min=50176, max=376320, per=13.19%, avg=165196.80, stdev=107523.50, samples=20
00:24:35.382 iops : min= 196, max= 1470, avg=645.30, stdev=420.01, samples=20
00:24:35.382 lat (msec) : 4=0.26%, 10=1.60%, 20=3.36%, 50=38.55%, 100=19.67%
00:24:35.382 lat (msec) : 250=32.07%, 500=4.48%
00:24:35.382 cpu : usr=1.95%, sys=1.94%, ctx=2791, majf=0, minf=1
00:24:35.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:24:35.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:35.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:35.382 issued rwts: total=0,6516,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:35.382 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:35.382
00:24:35.382 Run status group 0 (all jobs):
00:24:35.382 WRITE: bw=1223MiB/s (1283MB/s), 72.1MiB/s-194MiB/s (75.6MB/s-203MB/s), io=12.3GiB (13.2GB), run=10052-10266msec
00:24:35.382
00:24:35.382 Disk stats (read/write):
00:24:35.382 nvme0n1: ios=49/9218, merge=0/0, ticks=59/1219338, in_queue=1219397, util=97.09%
00:24:35.382 nvme10n1: ios=46/7418, merge=0/0, ticks=838/1170402, in_queue=1171240, util=100.00%
00:24:35.382 nvme1n1: ios=47/10090, merge=0/0, ticks=327/1215506, in_queue=1215833, util=99.97%
00:24:35.382 nvme2n1: ios=0/15378, merge=0/0, ticks=0/1205754, in_queue=1205754, util=97.49%
00:24:35.382 nvme3n1: ios=28/11122, merge=0/0, ticks=1135/1223097, in_queue=1224232, util=99.94%
00:24:35.382 nvme4n1: ios=0/6421, merge=0/0, ticks=0/1167116, in_queue=1167116, util=97.96%
00:24:35.382 nvme5n1: ios=0/5859, merge=0/0, ticks=0/1200434, in_queue=1200434, util=98.18%
00:24:35.382 nvme6n1: ios=0/6283, merge=0/0, ticks=0/1176872, in_queue=1176872, util=98.41%
00:24:35.382 nvme7n1: ios=0/6457, merge=0/0, ticks=0/1179287, in_queue=1179287, util=98.82%
00:24:35.382 nvme8n1: ios=25/7815, merge=0/0, ticks=155/1220305, in_queue=1220460, util=100.00%
00:24:35.382 nvme9n1: ios=37/12646, merge=0/0, ticks=132/1208079, in_queue=1208211, util=100.00%
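
For reference, the fio job file dumped at the top of this run (the 00:24:25.402 block above) reassembles to the following; every value is copied from that dump, and only the file's on-disk name is not shown in the log:

  rw=randwrite
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme10n1
  [job2]
  filename=/dev/nvme1n1
  [job3]
  filename=/dev/nvme2n1
  [job4]
  filename=/dev/nvme3n1
  [job5]
  filename=/dev/nvme4n1
  [job6]
  filename=/dev/nvme5n1
  [job7]
  filename=/dev/nvme6n1
  [job8]
  filename=/dev/nvme7n1
  [job9]
  filename=/dev/nvme8n1
  [job10]
  filename=/dev/nvme9n1

The parameters before [job0] act as defaults for all eleven jobs; note that job1 maps to /dev/nvme10n1 because the devices are enumerated lexically, which is also the order of the "Disk stats" list above. The "Could not set queue depth" warnings after the dump are non-fatal here: fio prints one per device when it cannot adjust the device queue setting, yet all eleven jobs start ("Starting 11 threads") and complete with err= 0.
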
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:24:35.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:35.382 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:24:35.382 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:24:35.382 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2
00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2
00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:24:35.383 18:25:01
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:35.383 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.383 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:35.641 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:35.641 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:35.641 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:35.641 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:35.641 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:35.641 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:35.641 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:35.642 18:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:35.642 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.642 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:35.901 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:35.901 18:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.901 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:36.161 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.161 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:36.420 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:36.420 18:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:36.420 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:36.420 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:36.420 
18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:36.420 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:24:36.677 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
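
The block above is the last of eleven repetitions of the same per-subsystem teardown. Reconstructed as a sketch from the xtrace (the loop and the three commands appear verbatim in the trace; the body of waitforserial_disconnect is condensed from the lsblk/grep probes shown, and its retry bound is an assumption, not visible in the log):

  # condensed sketch of the wait helper traced from autotest_common.sh
  waitforserial_disconnect() {
      local i=0
      # keep polling while any block device still reports the given serial
      while lsblk -o NAME,SERIAL | grep -q -w "$1" || lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
          (( i++ > 15 )) && return 1   # retry bound assumed, not shown in the log
          sleep 1
      done
      return 0
  }

  for i in $(seq 1 $NVMF_SUBSYS); do   # NVMF_SUBSYS is 11 in this run
      # detach the initiator-side controller for this subsystem
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # block until no namespace with serial SPDK<i> is visible any more
      waitforserial_disconnect "SPDK${i}"
      # then delete the subsystem on the SPDK target over the RPC socket
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done
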
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:36.677 rmmod nvme_tcp
00:24:36.677 rmmod nvme_fabrics
00:24:36.677 rmmod nvme_keyring
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1518304 ']'
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1518304
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1518304 ']'
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1518304
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1518304
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1518304'
00:24:36.677 killing process with pid 1518304
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1518304
00:24:36.677 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1518304
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:37.241 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:39.148 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:39.148
00:24:39.148 real 1m0.225s
00:24:39.148 user 3m16.029s
00:24:39.148 sys 0m24.281s
00:24:39.148 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:39.148 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:39.148 ************************************
00:24:39.148 END TEST nvmf_multiconnection
00:24:39.148 ************************************
00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.407 ************************************ 00:24:39.407 START TEST nvmf_initiator_timeout 00:24:39.407 ************************************ 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:39.407 * Looking for test storage... 00:24:39.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.407 18:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.407 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.310 18:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.310 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:41.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:41.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:41.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 
-- # [[ up == up ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:41.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.311 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:24:41.569 00:24:41.569 --- 10.0.0.2 ping statistics --- 00:24:41.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.569 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:24:41.569 00:24:41.569 --- 10.0.0.1 ping statistics --- 00:24:41.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.569 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1527295 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1527295 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1527295 ']' 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.569 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.570 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.570 [2024-07-26 18:25:07.588387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:24:41.570 [2024-07-26 18:25:07.588471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.570 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.570 [2024-07-26 18:25:07.625352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:41.570 [2024-07-26 18:25:07.657775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:41.828 [2024-07-26 18:25:07.748822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.828 [2024-07-26 18:25:07.748885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.828 [2024-07-26 18:25:07.748910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.828 [2024-07-26 18:25:07.748924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.828 [2024-07-26 18:25:07.748936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
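For reference, the nvmf_tcp_init sequence traced above reduces to the following standalone sketch. The interface names (cvl_0_0/cvl_0_1), the namespace name, the 10.0.0.0/24 addressing, and port 4420 are all taken directly from the trace; collecting them into one script is a reconstruction for readability, not the literal nvmf/common.sh source:

# rebuild the two-port loopback topology used by the TCP tests
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0          # clear stale addressing on both ports
ip -4 addr flush cvl_0_1

ip netns add "$NS"                # the target gets a private namespace
ip link set cvl_0_0 netns "$NS"   # move the target-side port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# admit NVMe/TCP traffic arriving from the initiator port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions before starting nvmf_tgt
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Prefixing the target with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD seen in the trace) is what lets a single dual-port NIC act as both initiator and target on one host.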
00:24:41.828 [2024-07-26 18:25:07.749222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.828 [2024-07-26 18:25:07.749304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:41.828 [2024-07-26 18:25:07.749306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.828 [2024-07-26 18:25:07.749246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.828 Malloc0 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.828 Delay0 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.828 [2024-07-26 18:25:07.945472] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.828 18:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.828 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:42.087 [2024-07-26 18:25:07.973744] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.087 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.087 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:42.690 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:42.690 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:42.690 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.690 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:42.690 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1527611 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:24:44.591 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:44.591 [global] 00:24:44.591 thread=1 00:24:44.591 invalidate=1 00:24:44.591 rw=write 00:24:44.591 time_based=1 00:24:44.591 runtime=60 00:24:44.591 ioengine=libaio 00:24:44.591 direct=1 00:24:44.591 bs=4096 00:24:44.591 iodepth=1 00:24:44.591 norandommap=0 00:24:44.591 numjobs=1 00:24:44.591 00:24:44.591 verify_dump=1 00:24:44.591 verify_backlog=512 00:24:44.591 verify_state_save=0 00:24:44.591 do_verify=1 00:24:44.591 verify=crc32c-intel 00:24:44.591 [job0] 00:24:44.591 filename=/dev/nvme0n1 00:24:44.591 Could not set queue depth (nvme0n1) 00:24:44.849 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:44.849 fio-3.35 00:24:44.849 Starting 1 thread 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:48.133 true 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.133 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:48.133 true 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:48.134 true 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:48.134 true 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.134 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:24:50.671 true 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:50.671 true 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:50.671 true 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:50.671 true 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:50.671 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1527611 00:25:46.924 00:25:46.924 job0: (groupid=0, jobs=1): err= 0: pid=1527801: Fri Jul 26 18:26:10 2024 00:25:46.924 read: IOPS=43, BW=175KiB/s (180kB/s)(10.3MiB/60024msec) 00:25:46.924 slat (usec): min=5, max=8890, avg=18.28, stdev=173.21 00:25:46.924 clat (usec): min=304, max=41282k, avg=22452.08, stdev=804837.52 00:25:46.924 lat (usec): min=311, max=41282k, avg=22470.36, stdev=804837.71 00:25:46.924 clat percentiles (usec): 00:25:46.924 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 00:25:46.924 | 20.00th=[ 343], 30.00th=[ 359], 40.00th=[ 371], 00:25:46.925 | 50.00th=[ 420], 60.00th=[ 449], 70.00th=[ 474], 00:25:46.925 | 80.00th=[ 545], 90.00th=[ 41157], 95.00th=[ 42206], 00:25:46.925 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:25:46.925 | 99.95th=[ 45876], 99.99th=[17112761] 00:25:46.925 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60024msec); 0 zone resets 00:25:46.925 slat (nsec): min=6876, max=84878, avg=15316.28, stdev=9519.96 00:25:46.925 clat (usec): min=210, max=476, avg=270.75, stdev=52.66 00:25:46.925 lat (usec): min=219, max=506, avg=286.06, stdev=59.01 00:25:46.925 clat percentiles (usec): 00:25:46.925 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:25:46.925 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 265], 00:25:46.925 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 351], 95.00th=[ 388], 00:25:46.925 | 99.00th=[ 441], 99.50th=[ 445], 99.90th=[ 453], 99.95th=[ 469], 00:25:46.925 | 99.99th=[ 478] 
00:25:46.925 bw ( KiB/s): min= 1824, max= 6368, per=100.00%, avg=4096.00, stdev=1664.40, samples=6 00:25:46.925 iops : min= 456, max= 1592, avg=1024.00, stdev=416.10, samples=6 00:25:46.925 lat (usec) : 250=28.28%, 500=60.35%, 750=4.19% 00:25:46.925 lat (msec) : 10=0.02%, 50=7.14%, >=2000=0.02% 00:25:46.925 cpu : usr=0.09%, sys=0.19%, ctx=5704, majf=0, minf=2 00:25:46.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.925 issued rwts: total=2631,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:46.925 00:25:46.925 Run status group 0 (all jobs): 00:25:46.925 READ: bw=175KiB/s (180kB/s), 175KiB/s-175KiB/s (180kB/s-180kB/s), io=10.3MiB (10.8MB), run=60024-60024msec 00:25:46.925 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60024-60024msec 00:25:46.925 00:25:46.925 Disk stats (read/write): 00:25:46.925 nvme0n1: ios=2727/3072, merge=0/0, ticks=17879/791, in_queue=18670, util=99.74% 00:25:46.925 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:46.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:46.925 nvmf hotplug test: fio successful as expected 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:46.925 rmmod nvme_tcp 00:25:46.925 rmmod nvme_fabrics 00:25:46.925 rmmod nvme_keyring 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1527295 ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1527295 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1527295 ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1527295 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527295 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527295' 00:25:46.925 killing process with pid 1527295 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1527295 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1527295 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.925 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.493 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:47.493 00:25:47.493 real 1m8.105s 00:25:47.493 user 4m10.606s 00:25:47.493 sys 0m6.546s 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.494 ************************************ 00:25:47.494 END TEST nvmf_initiator_timeout 00:25:47.494 ************************************ 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.494 18:26:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:49.394 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.394 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.394 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.394 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.394 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.394 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.395 18:26:15 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:49.395 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:49.395 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:49.395 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:49.395 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:49.395 ************************************ 00:25:49.395 START TEST nvmf_perf_adq 00:25:49.395 ************************************ 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:49.395 * Looking for test storage... 
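The PCI scan traced above (and rerun below once perf_adq.sh re-sources nvmf/common.sh) follows a simple pattern: collect the PCI IDs supported for the transport, keep the e810 set when present, then map each matched PCI function to its kernel netdev through sysfs. A condensed sketch of that logic, assuming the e810/tcp case shown in the trace (0x8086:0x159b, ice driver); the hard-coded pci_devs list is for illustration, and the operstate read stands in for whatever the trace's [[ up == up ]] check actually inspects:

# condensed form of gather_supported_nvmf_pci_devs for the traced system
pci_devs=(0000:0a:00.0 0000:0a:00.1)   # functions matching 0x8086:0x159b
net_devs=()
for pci in "${pci_devs[@]}"; do
    # each PCI function exposes its netdev(s) under sysfs
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        dev=${net_dev##*/}
        # keep only interfaces that are up, as the trace does
        if [[ $(<"$net_dev/operstate") == up ]]; then
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        fi
    done
done

perf_adq.sh then gates on the result (the (( 2 == 0 )) check above): with no detected devices the test would bail out before touching the driver.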
00:25:49.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.395 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.395 18:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:49.396 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.943 18:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:51.943 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:51.943 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:51.943 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:51.943 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.943 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.944 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.944 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:51.944 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:51.944 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:51.944 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:52.213 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:54.116 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:59.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:59.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.396 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:59.397 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.397 18:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:59.397 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
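nvmf_tcp_init (traced above) builds the back-to-back test topology from the two E810 ports: the target port cvl_0_0 is moved into its own network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so target and initiator traffic crosses the physical link rather than loopback. Condensed from the commands in the log:

    ip netns add cvl_0_0_ns_spdk                 # isolated namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

The two ping checks that follow confirm the link works in both directions before the target is started.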
00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:25:59.397 00:25:59.397 --- 10.0.0.2 ping statistics --- 00:25:59.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.397 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:25:59.397 00:25:59.397 --- 10.0.0.1 ping statistics --- 00:25:59.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.397 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1539306 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1539306 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1539306 ']' 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:59.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.397 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.397 [2024-07-26 18:26:25.292809] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:25:59.397 [2024-07-26 18:26:25.292896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.397 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.397 [2024-07-26 18:26:25.330488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:59.397 [2024-07-26 18:26:25.363246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.397 [2024-07-26 18:26:25.453826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.397 [2024-07-26 18:26:25.453890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.397 [2024-07-26 18:26:25.453917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.397 [2024-07-26 18:26:25.453931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.397 [2024-07-26 18:26:25.453943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.397 [2024-07-26 18:26:25.454046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.397 [2024-07-26 18:26:25.454100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.398 [2024-07-26 18:26:25.454142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.398 [2024-07-26 18:26:25.454144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.398 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.398 18:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.657 [2024-07-26 18:26:25.687576] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.657 Malloc1 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.657 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.658 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.658 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.658 [2024-07-26 18:26:25.740643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.658 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.658 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1539341 00:25:59.658 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:59.658 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:59.658 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.192 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:02.192 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.192 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.192 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.192 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:02.192 "tick_rate": 2700000000, 00:26:02.192 "poll_groups": [ 00:26:02.192 { 00:26:02.192 "name": "nvmf_tgt_poll_group_000", 00:26:02.192 "admin_qpairs": 1, 00:26:02.192 "io_qpairs": 1, 00:26:02.192 "current_admin_qpairs": 1, 00:26:02.192 "current_io_qpairs": 1, 00:26:02.192 "pending_bdev_io": 0, 00:26:02.192 "completed_nvme_io": 20097, 00:26:02.192 "transports": [ 00:26:02.192 { 00:26:02.192 "trtype": "TCP" 00:26:02.192 } 00:26:02.192 ] 00:26:02.192 }, 00:26:02.192 { 00:26:02.192 "name": "nvmf_tgt_poll_group_001", 00:26:02.192 "admin_qpairs": 0, 00:26:02.192 "io_qpairs": 1, 00:26:02.192 "current_admin_qpairs": 0, 00:26:02.192 "current_io_qpairs": 1, 00:26:02.192 "pending_bdev_io": 0, 00:26:02.192 "completed_nvme_io": 19890, 00:26:02.192 "transports": [ 00:26:02.192 { 00:26:02.192 "trtype": "TCP" 00:26:02.192 } 00:26:02.192 ] 00:26:02.192 }, 00:26:02.192 { 00:26:02.192 "name": "nvmf_tgt_poll_group_002", 00:26:02.192 "admin_qpairs": 0, 00:26:02.192 "io_qpairs": 1, 00:26:02.192 "current_admin_qpairs": 0, 00:26:02.192 "current_io_qpairs": 1, 00:26:02.192 "pending_bdev_io": 0, 00:26:02.192 "completed_nvme_io": 16959, 00:26:02.192 "transports": [ 00:26:02.192 { 00:26:02.192 "trtype": "TCP" 00:26:02.192 } 00:26:02.192 ] 00:26:02.193 }, 00:26:02.193 { 00:26:02.193 "name": "nvmf_tgt_poll_group_003", 00:26:02.193 "admin_qpairs": 0, 00:26:02.193 "io_qpairs": 1, 00:26:02.193 "current_admin_qpairs": 0, 00:26:02.193 "current_io_qpairs": 1, 00:26:02.193 "pending_bdev_io": 0, 00:26:02.193 "completed_nvme_io": 20386, 00:26:02.193 "transports": [ 00:26:02.193 { 00:26:02.193 "trtype": "TCP" 00:26:02.193 } 00:26:02.193 ] 00:26:02.193 } 00:26:02.193 ] 00:26:02.193 }' 00:26:02.193 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:02.193 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:02.193 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@78 -- # count=4 00:26:02.193 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:02.193 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1539341 00:26:10.311 Initializing NVMe Controllers 00:26:10.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:10.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:10.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:10.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:10.311 Initialization complete. Launching workers. 00:26:10.311 ======================================================== 00:26:10.311 Latency(us) 00:26:10.311 Device Information : IOPS MiB/s Average min max 00:26:10.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10727.30 41.90 5966.61 2025.83 8798.63 00:26:10.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10512.80 41.07 6087.99 1185.18 9473.02 00:26:10.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8944.20 34.94 7156.24 3540.32 10440.72 00:26:10.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10662.40 41.65 6002.18 5085.62 7372.27 00:26:10.311 ======================================================== 00:26:10.311 Total : 40846.69 159.56 6267.63 1185.18 10440.72 00:26:10.311 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.311 rmmod nvme_tcp 00:26:10.311 rmmod nvme_fabrics 00:26:10.311 rmmod nvme_keyring 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:10.311 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1539306 ']' 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1539306 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1539306 ']' 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1539306 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1539306 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1539306' 00:26:10.312 killing process with pid 1539306 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1539306 00:26:10.312 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1539306 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.312 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.220 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:12.220 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:12.220 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:12.789 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:14.695 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@285 -- # xtrace_disable 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
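gather_supported_nvmf_pci_devs, replayed here for the second pass, classifies NICs purely by PCI vendor:device ID: the e810 list gets Intel 0x1592/0x159b, x722 gets 0x37d2, and the mlx list the Mellanox ConnectX IDs; both ports on this node match 8086:159b (an E810 variant). Outside the harness the same lookup is just a filtered lspci (a hedged equivalent, not harness code):

    lspci -D -d 8086:159b    # list E810 functions with full PCI domain addresses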
00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:20.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:20.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:20.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.004 
18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:20.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.004 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.005 18:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:20.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:26:20.005 00:26:20.005 --- 10.0.0.2 ping statistics --- 00:26:20.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.005 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:26:20.005 00:26:20.005 --- 10.0.0.1 ping statistics --- 00:26:20.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.005 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:20.005 net.core.busy_poll = 1 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:20.005 net.core.busy_read = 1 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:20.005 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 
1 mode channel 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1541969 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1541969 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1541969 ']' 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.005 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.263 [2024-07-26 18:26:46.149019] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:20.264 [2024-07-26 18:26:46.149116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.264 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.264 [2024-07-26 18:26:46.188875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:20.264 [2024-07-26 18:26:46.214618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.264 [2024-07-26 18:26:46.299556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.264 [2024-07-26 18:26:46.299621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
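adq_configure_driver (perf_adq.sh@22-38, traced above) is where ADQ actually gets wired up on the target port: hardware TC offload on, the driver's packet-inspect optimization off, busy polling enabled, an mqprio qdisc splitting the channels into two traffic classes, and a hardware flower filter steering NVMe/TCP traffic (dst_port 4420) into TC 1. Condensed from the log (the ethtool/tc commands run there via ip netns exec cvl_0_0_ns_spdk):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2, offloaded in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target is then started with --wait-for-rpc so that sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1 (below) can be applied before the TCP transport begins accepting connections.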
00:26:20.264 [2024-07-26 18:26:46.299634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.264 [2024-07-26 18:26:46.299645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.264 [2024-07-26 18:26:46.299655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.264 [2024-07-26 18:26:46.299739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.264 [2024-07-26 18:26:46.299820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.264 [2024-07-26 18:26:46.299763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.264 [2024-07-26 18:26:46.299822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.264 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.264 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:20.264 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.264 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.264 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:20.522 18:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.522 [2024-07-26 18:26:46.583572] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:20.522 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.523 Malloc1 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.523 [2024-07-26 18:26:46.636876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1542114 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:20.523 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:20.782 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:22.690 "tick_rate": 2700000000, 00:26:22.690 "poll_groups": [ 00:26:22.690 { 00:26:22.690 "name": "nvmf_tgt_poll_group_000", 00:26:22.690 "admin_qpairs": 1, 00:26:22.690 "io_qpairs": 1, 00:26:22.690 "current_admin_qpairs": 1, 00:26:22.690 "current_io_qpairs": 1, 00:26:22.690 "pending_bdev_io": 0, 00:26:22.690 "completed_nvme_io": 23275, 00:26:22.690 "transports": [ 00:26:22.690 { 00:26:22.690 "trtype": "TCP" 00:26:22.690 } 00:26:22.690 ] 00:26:22.690 }, 00:26:22.690 { 00:26:22.690 "name": "nvmf_tgt_poll_group_001", 00:26:22.690 "admin_qpairs": 0, 00:26:22.690 "io_qpairs": 3, 00:26:22.690 "current_admin_qpairs": 0, 00:26:22.690 "current_io_qpairs": 3, 00:26:22.690 "pending_bdev_io": 0, 00:26:22.690 "completed_nvme_io": 25156, 00:26:22.690 "transports": [ 00:26:22.690 { 00:26:22.690 "trtype": "TCP" 00:26:22.690 } 00:26:22.690 ] 00:26:22.690 }, 00:26:22.690 { 00:26:22.690 "name": "nvmf_tgt_poll_group_002", 00:26:22.690 "admin_qpairs": 0, 00:26:22.690 "io_qpairs": 0, 00:26:22.690 "current_admin_qpairs": 0, 00:26:22.690 "current_io_qpairs": 0, 00:26:22.690 "pending_bdev_io": 0, 00:26:22.690 "completed_nvme_io": 0, 00:26:22.690 "transports": [ 00:26:22.690 { 00:26:22.690 "trtype": "TCP" 00:26:22.690 } 00:26:22.690 ] 00:26:22.690 }, 00:26:22.690 { 00:26:22.690 "name": "nvmf_tgt_poll_group_003", 00:26:22.690 "admin_qpairs": 0, 00:26:22.690 "io_qpairs": 0, 00:26:22.690 "current_admin_qpairs": 0, 00:26:22.690 "current_io_qpairs": 0, 00:26:22.690 "pending_bdev_io": 0, 00:26:22.690 "completed_nvme_io": 0, 00:26:22.690 "transports": [ 00:26:22.690 { 00:26:22.690 "trtype": "TCP" 00:26:22.690 } 00:26:22.690 ] 00:26:22.690 } 00:26:22.690 ] 00:26:22.690 }' 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:22.690 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1542114 00:26:30.810 Initializing NVMe Controllers 00:26:30.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:30.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:30.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:30.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:30.810 Initialization complete. Launching workers. 
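The nvmf_get_stats check above is the harness's steering assertion for this pass: with placement-id and socket priority 1 in effect, the four perf connections collapse onto two poll groups (23275 and 25156 completed I/Os on groups 000/001) while groups 002/003 stay idle, whereas the first pass, without the tc/flower steering, spread one qpair per group. The idle groups are counted with jq (stats.json standing in for the rpc output; 'length' just forces one output line per matching group):

    jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l

The test fails only if fewer than two groups went quiet (the [[ 2 -lt 2 ]] check above), and the per-core perf results that follow reflect the same imbalance.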
00:26:30.810 ======================================================== 00:26:30.810 Latency(us) 00:26:30.810 Device Information : IOPS MiB/s Average min max 00:26:30.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4233.80 16.54 15125.22 1872.48 62739.89 00:26:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4229.70 16.52 15166.85 3289.70 62632.15 00:26:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12138.00 47.41 5272.53 1444.95 48625.37 00:26:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4847.30 18.93 13211.45 1900.81 63629.98 00:26:30.811 ======================================================== 00:26:30.811 Total : 25448.79 99.41 10068.30 1444.95 63629.98 00:26:30.811 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.811 rmmod nvme_tcp 00:26:30.811 rmmod nvme_fabrics 00:26:30.811 rmmod nvme_keyring 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1541969 ']' 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1541969 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1541969 ']' 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1541969 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541969 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541969' 00:26:30.811 killing process with pid 1541969 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1541969 00:26:30.811 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1541969 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:31.071 
18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.071 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:33.609 00:26:33.609 real 0m43.775s 00:26:33.609 user 2m33.983s 00:26:33.609 sys 0m11.928s 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.609 ************************************ 00:26:33.609 END TEST nvmf_perf_adq 00:26:33.609 ************************************ 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:33.609 ************************************ 00:26:33.609 START TEST nvmf_shutdown 00:26:33.609 ************************************ 00:26:33.609 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:33.609 * Looking for test storage... 
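[editor's note] Every suite in this log runs under the same run_test wrapper from autotest_common.sh: it validates its arguments (the '[' 3 -le 1 ']' check above), prints the START banner, times the script, and closes with the END banner after the real/user/sys block. A hedged reconstruction of that pattern, much shortened from the real helper:

    run_test() {
        [ $# -le 1 ] && return 1            # needs a test name plus a command
        local name=$1; shift
        echo "*** START TEST $name ***"
        time "$@"                           # emits the real/user/sys summary
        echo "*** END TEST $name ***"
    }
    run_test nvmf_shutdown ./test/nvmf/target/shutdown.sh --transport=tcp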
00:26:33.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.610 18:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:33.610 18:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:33.610 ************************************ 00:26:33.610 START TEST nvmf_shutdown_tc1 00:26:33.610 ************************************ 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.610 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:35.510 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:35.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:35.511 18:27:01 
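[editor's note] gather_supported_nvmf_pci_devs buckets NICs by vendor:device id out of a prebuilt pci_bus_cache: 0x8086:0x1592/0x159b are Intel E810, 0x8086:0x37d2 is X722, and the 0x15b3 entries cover Mellanox parts; both ports of one E810 (0x159b) match on this node. The netdev behind each matched PCI function is then read straight from sysfs, roughly:

    # List the kernel net devices bound to each matched PCI address.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done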
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:35.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:35.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:35.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.511 18:27:01 
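[editor's note] nvmf_tcp_init, whose remaining steps appear in the trace just below, splits the two E810 ports across namespaces so the test traffic really crosses the link: the target port cvl_0_0 (10.0.0.2) moves into a fresh netns while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace. Condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # reachability check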
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:35.511 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:35.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:26:35.512 00:26:35.512 --- 10.0.0.2 ping statistics --- 00:26:35.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.512 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:26:35.512 00:26:35.512 --- 10.0.0.1 ping statistics --- 00:26:35.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.512 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1545325 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1545325 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1545325 ']' 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.512 [2024-07-26 18:27:01.372868] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:35.512 [2024-07-26 18:27:01.372946] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.512 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.512 [2024-07-26 18:27:01.411760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:35.512 [2024-07-26 18:27:01.437522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.512 [2024-07-26 18:27:01.530365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.512 [2024-07-26 18:27:01.530424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.512 [2024-07-26 18:27:01.530450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.512 [2024-07-26 18:27:01.530464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.512 [2024-07-26 18:27:01.530475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
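[editor's note] nvmfappstart launched the target inside the namespace with -m 0x1E; that mask is binary 11110, i.e. cores 1-4, which matches the four reactor threads reported next. A sketch of the launch-and-wait step with the workspace path shortened (the harness's waitforlisten polls for the RPC socket rather than sleeping a fixed interval):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &    # 0x1E = cores 1-4
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # wait for the RPC socket
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init  # block until init done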
00:26:35.512 [2024-07-26 18:27:01.530564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.512 [2024-07-26 18:27:01.530684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.512 [2024-07-26 18:27:01.530748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:35.512 [2024-07-26 18:27:01.530751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.512 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.772 [2024-07-26 18:27:01.677413] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:26:35.772 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.773 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.773 Malloc1 00:26:35.773 [2024-07-26 18:27:01.752535] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.773 Malloc2 00:26:35.773 Malloc3 00:26:35.773 Malloc4 00:26:36.032 Malloc5 00:26:36.032 Malloc6 00:26:36.032 Malloc7 00:26:36.032 Malloc8 00:26:36.032 Malloc9 00:26:36.032 Malloc10 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1545447 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1545447 /var/tmp/bdevperf.sock 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1545447 ']' 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.290 { 00:26:36.290 "params": { 00:26:36.290 "name": "Nvme$subsystem", 00:26:36.290 "trtype": "$TEST_TRANSPORT", 00:26:36.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.290 "adrfam": "ipv4", 00:26:36.290 "trsvcid": "$NVMF_PORT", 00:26:36.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.290 "hdgst": ${hdgst:-false}, 00:26:36.290 "ddgst": ${ddgst:-false} 00:26:36.290 }, 00:26:36.290 "method": "bdev_nvme_attach_controller" 00:26:36.290 } 00:26:36.290 EOF 00:26:36.290 )") 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.290 { 00:26:36.290 "params": { 00:26:36.290 "name": "Nvme$subsystem", 00:26:36.290 "trtype": "$TEST_TRANSPORT", 00:26:36.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.290 "adrfam": "ipv4", 00:26:36.290 "trsvcid": "$NVMF_PORT", 00:26:36.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.290 "hdgst": ${hdgst:-false}, 00:26:36.290 "ddgst": ${ddgst:-false} 00:26:36.290 }, 00:26:36.290 "method": "bdev_nvme_attach_controller" 00:26:36.290 } 00:26:36.290 EOF 00:26:36.290 )") 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.290 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.290 { 00:26:36.290 "params": { 00:26:36.290 "name": "Nvme$subsystem", 
00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.291 { 00:26:36.291 "params": { 00:26:36.291 "name": "Nvme$subsystem", 00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.291 { 00:26:36.291 "params": { 00:26:36.291 "name": "Nvme$subsystem", 00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.291 { 00:26:36.291 "params": { 00:26:36.291 "name": "Nvme$subsystem", 00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.291 18:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.291 { 00:26:36.291 "params": { 00:26:36.291 "name": "Nvme$subsystem", 00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.291 { 00:26:36.291 "params": { 00:26:36.291 "name": "Nvme$subsystem", 00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.291 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.291 { 00:26:36.291 "params": { 00:26:36.291 "name": "Nvme$subsystem", 00:26:36.291 "trtype": "$TEST_TRANSPORT", 00:26:36.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.291 "adrfam": "ipv4", 00:26:36.291 "trsvcid": "$NVMF_PORT", 00:26:36.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.291 "hdgst": ${hdgst:-false}, 00:26:36.291 "ddgst": ${ddgst:-false} 00:26:36.291 }, 00:26:36.291 "method": "bdev_nvme_attach_controller" 00:26:36.291 } 00:26:36.291 EOF 00:26:36.291 )") 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.292 { 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme$subsystem", 00:26:36.292 "trtype": "$TEST_TRANSPORT", 00:26:36.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "$NVMF_PORT", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.292 "hdgst": ${hdgst:-false}, 00:26:36.292 "ddgst": ${ddgst:-false} 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 } 00:26:36.292 EOF 00:26:36.292 )") 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:36.292 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme1", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme2", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme3", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme4", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme5", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme6", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme7", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme8", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 
00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme9", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.292 "adrfam": "ipv4", 00:26:36.292 "trsvcid": "4420", 00:26:36.292 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:36.292 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:36.292 "hdgst": false, 00:26:36.292 "ddgst": false 00:26:36.292 }, 00:26:36.292 "method": "bdev_nvme_attach_controller" 00:26:36.292 },{ 00:26:36.292 "params": { 00:26:36.292 "name": "Nvme10", 00:26:36.292 "trtype": "tcp", 00:26:36.292 "traddr": "10.0.0.2", 00:26:36.293 "adrfam": "ipv4", 00:26:36.293 "trsvcid": "4420", 00:26:36.293 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:36.293 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:36.293 "hdgst": false, 00:26:36.293 "ddgst": false 00:26:36.293 }, 00:26:36.293 "method": "bdev_nvme_attach_controller" 00:26:36.293 }' 00:26:36.293 [2024-07-26 18:27:02.254710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:36.293 [2024-07-26 18:27:02.254784] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:36.293 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.293 [2024-07-26 18:27:02.290673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:36.293 [2024-07-26 18:27:02.320175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.293 [2024-07-26 18:27:02.407994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1545447 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:38.195 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:39.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1545447 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1545325 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.130 { 00:26:39.130 "params": { 00:26:39.130 "name": "Nvme$subsystem", 00:26:39.130 "trtype": "$TEST_TRANSPORT", 00:26:39.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.130 "adrfam": "ipv4", 00:26:39.130 "trsvcid": "$NVMF_PORT", 00:26:39.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.130 "hdgst": ${hdgst:-false}, 00:26:39.130 "ddgst": ${ddgst:-false} 00:26:39.130 }, 00:26:39.130 "method": "bdev_nvme_attach_controller" 00:26:39.130 } 00:26:39.130 EOF 00:26:39.130 )") 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.130 18:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.130 { 00:26:39.130 "params": { 00:26:39.130 "name": "Nvme$subsystem", 00:26:39.130 "trtype": "$TEST_TRANSPORT", 00:26:39.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.130 "adrfam": "ipv4", 00:26:39.130 "trsvcid": "$NVMF_PORT", 00:26:39.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.130 "hdgst": ${hdgst:-false}, 00:26:39.130 "ddgst": ${ddgst:-false} 00:26:39.130 }, 00:26:39.130 "method": "bdev_nvme_attach_controller" 00:26:39.130 } 00:26:39.130 EOF 00:26:39.130 )") 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.130 { 00:26:39.130 "params": { 00:26:39.130 "name": "Nvme$subsystem", 00:26:39.130 "trtype": "$TEST_TRANSPORT", 00:26:39.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.130 "adrfam": "ipv4", 00:26:39.130 "trsvcid": "$NVMF_PORT", 00:26:39.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.130 "hdgst": ${hdgst:-false}, 00:26:39.130 "ddgst": ${ddgst:-false} 00:26:39.130 }, 00:26:39.130 "method": "bdev_nvme_attach_controller" 00:26:39.130 } 00:26:39.130 EOF 00:26:39.130 )") 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.130 { 00:26:39.130 "params": { 00:26:39.130 "name": "Nvme$subsystem", 00:26:39.130 "trtype": "$TEST_TRANSPORT", 00:26:39.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.130 "adrfam": "ipv4", 00:26:39.130 "trsvcid": "$NVMF_PORT", 00:26:39.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.130 "hdgst": ${hdgst:-false}, 00:26:39.130 "ddgst": ${ddgst:-false} 00:26:39.130 }, 00:26:39.130 "method": "bdev_nvme_attach_controller" 00:26:39.130 } 00:26:39.130 EOF 00:26:39.130 )") 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.130 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.131 { 00:26:39.131 "params": { 00:26:39.131 "name": "Nvme$subsystem", 00:26:39.131 "trtype": "$TEST_TRANSPORT", 00:26:39.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.131 "adrfam": "ipv4", 00:26:39.131 "trsvcid": "$NVMF_PORT", 00:26:39.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.131 "hdgst": ${hdgst:-false}, 00:26:39.131 "ddgst": ${ddgst:-false} 00:26:39.131 }, 00:26:39.131 "method": "bdev_nvme_attach_controller" 00:26:39.131 } 00:26:39.131 EOF 00:26:39.131 )") 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.131 { 00:26:39.131 "params": { 00:26:39.131 "name": "Nvme$subsystem", 00:26:39.131 "trtype": "$TEST_TRANSPORT", 00:26:39.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.131 "adrfam": "ipv4", 00:26:39.131 "trsvcid": "$NVMF_PORT", 00:26:39.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.131 "hdgst": ${hdgst:-false}, 00:26:39.131 "ddgst": ${ddgst:-false} 00:26:39.131 }, 00:26:39.131 "method": "bdev_nvme_attach_controller" 00:26:39.131 } 00:26:39.131 EOF 00:26:39.131 )") 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.131 { 00:26:39.131 "params": { 00:26:39.131 "name": "Nvme$subsystem", 00:26:39.131 "trtype": "$TEST_TRANSPORT", 00:26:39.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.131 "adrfam": "ipv4", 00:26:39.131 "trsvcid": "$NVMF_PORT", 00:26:39.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.131 "hdgst": ${hdgst:-false}, 00:26:39.131 "ddgst": ${ddgst:-false} 00:26:39.131 }, 00:26:39.131 "method": "bdev_nvme_attach_controller" 00:26:39.131 } 00:26:39.131 EOF 00:26:39.131 )") 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.131 { 00:26:39.131 "params": { 00:26:39.131 "name": "Nvme$subsystem", 00:26:39.131 "trtype": "$TEST_TRANSPORT", 00:26:39.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.131 "adrfam": "ipv4", 00:26:39.131 "trsvcid": "$NVMF_PORT", 00:26:39.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.131 "hdgst": ${hdgst:-false}, 00:26:39.131 "ddgst": ${ddgst:-false} 00:26:39.131 }, 00:26:39.131 "method": "bdev_nvme_attach_controller" 00:26:39.131 } 00:26:39.131 EOF 00:26:39.131 )") 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.131 { 00:26:39.131 "params": { 00:26:39.131 "name": "Nvme$subsystem", 00:26:39.131 "trtype": "$TEST_TRANSPORT", 00:26:39.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.131 "adrfam": "ipv4", 00:26:39.131 "trsvcid": "$NVMF_PORT", 00:26:39.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.131 "hdgst": ${hdgst:-false}, 00:26:39.131 "ddgst": ${ddgst:-false} 00:26:39.131 }, 
00:26:39.131 "method": "bdev_nvme_attach_controller" 00:26:39.131 } 00:26:39.131 EOF 00:26:39.131 )") 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.131 { 00:26:39.131 "params": { 00:26:39.131 "name": "Nvme$subsystem", 00:26:39.131 "trtype": "$TEST_TRANSPORT", 00:26:39.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.131 "adrfam": "ipv4", 00:26:39.131 "trsvcid": "$NVMF_PORT", 00:26:39.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.131 "hdgst": ${hdgst:-false}, 00:26:39.131 "ddgst": ${ddgst:-false} 00:26:39.131 }, 00:26:39.131 "method": "bdev_nvme_attach_controller" 00:26:39.131 } 00:26:39.131 EOF 00:26:39.131 )") 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:39.131 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:39.389 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:39.389 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme1", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme2", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme3", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme4", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme5", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:39.389 "hdgst": false, 
00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme6", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme7", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme8", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme9", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 },{ 00:26:39.389 "params": { 00:26:39.389 "name": "Nvme10", 00:26:39.389 "trtype": "tcp", 00:26:39.389 "traddr": "10.0.0.2", 00:26:39.389 "adrfam": "ipv4", 00:26:39.389 "trsvcid": "4420", 00:26:39.389 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:39.389 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:39.389 "hdgst": false, 00:26:39.389 "ddgst": false 00:26:39.389 }, 00:26:39.389 "method": "bdev_nvme_attach_controller" 00:26:39.389 }' 00:26:39.389 [2024-07-26 18:27:05.284640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:39.389 [2024-07-26 18:27:05.284730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545863 ] 00:26:39.389 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.389 [2024-07-26 18:27:05.321818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:39.389 [2024-07-26 18:27:05.350639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.389 [2024-07-26 18:27:05.437089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.293 Running I/O for 1 seconds... 
00:26:42.229
00:26:42.229 Latency(us)
00:26:42.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.229 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme1n1 : 1.15 221.93 13.87 0.00 0.00 285455.55 20388.98 257872.02
00:26:42.229 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme2n1 : 1.14 223.86 13.99 0.00 0.00 278571.43 23495.87 254765.13
00:26:42.229 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme3n1 : 1.07 239.33 14.96 0.00 0.00 255404.94 18058.81 254765.13
00:26:42.229 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme4n1 : 1.15 278.20 17.39 0.00 0.00 216814.33 17282.09 250104.79
00:26:42.229 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme5n1 : 1.18 216.98 13.56 0.00 0.00 272212.01 22330.79 270299.59
00:26:42.229 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme6n1 : 1.14 224.63 14.04 0.00 0.00 259154.11 22233.69 257872.02
00:26:42.229 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme7n1 : 1.14 225.43 14.09 0.00 0.00 253553.78 19029.71 256318.58
00:26:42.229 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme8n1 : 1.19 269.18 16.82 0.00 0.00 210005.11 16311.18 259425.47
00:26:42.229 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme9n1 : 1.19 214.67 13.42 0.00 0.00 259129.27 30874.74 282727.16
00:26:42.229 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.229 Verification LBA range: start 0x0 length 0x400
00:26:42.229 Nvme10n1 : 1.21 265.47 16.59 0.00 0.00 206367.71 10194.49 281173.71
===================================================================================================================
00:26:42.229 Total : 2379.67 148.73 0.00 0.00 246973.49 10194.49 282727.16
00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:42.494 18:27:08
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.494 rmmod nvme_tcp 00:26:42.494 rmmod nvme_fabrics 00:26:42.494 rmmod nvme_keyring 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1545325 ']' 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1545325 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1545325 ']' 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1545325 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545325 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545325' 00:26:42.494 killing process with pid 1545325 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1545325 00:26:42.494 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1545325 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
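The killprocess run above (autotest_common.sh@950-@974) is deliberately defensive: it checks that the pid is non-empty and still alive, resolves the process name, and refuses to signal a bare sudo wrapper before killing and reaping the target. A paraphrased sketch of that guard chain (not the verbatim helper, which has more branches, e.g. killing the child of a sudo wrapper):

# Paraphrase of the guard chain traced at @950-@974
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                       # @950: pid must be set
    kill -0 "$pid" || return 1                      # @954: and still alive
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")  # @955/@956
    [ "$process_name" = sudo ] && return 1          # @960: never signal bare sudo
    echo "killing process with pid $pid"            # @968
    kill "$pid"                                     # @969
    wait "$pid" || true                             # @974: reap, ignore exit code
}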
00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.107 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:45.014 00:26:45.014 real 0m11.724s 00:26:45.014 user 0m34.705s 00:26:45.014 sys 0m3.118s 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:45.014 ************************************ 00:26:45.014 END TEST nvmf_shutdown_tc1 00:26:45.014 ************************************ 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:45.014 ************************************ 00:26:45.014 START TEST nvmf_shutdown_tc2 00:26:45.014 ************************************ 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.014 18:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.014 18:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:45.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:45.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.014 18:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:45.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.014 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:45.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.015 18:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.015 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:26:45.273 00:26:45.273 --- 10.0.0.2 ping statistics --- 00:26:45.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.273 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:26:45.273 00:26:45.273 --- 10.0.0.1 ping statistics --- 00:26:45.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.273 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1547137 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1547137 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1547137 ']' 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
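waitforlisten (autotest_common.sh@831-@864) is what turns the 'Waiting for process...' message above into a synchronization point: it polls until the freshly started nvmf_tgt answers on its RPC socket, or gives up if the process dies. Its internals are not traced here; the following is a minimal sketch of the idea, with the rpc_get_methods probe and $rootdir (the SPDK checkout) as assumptions:

# Minimal sketch of a waitforlisten-style poll; not the verbatim helper
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # app died during startup
        # any successful RPC means the socket is up and answering
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1   # timed out
}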
00:26:45.273 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:45.274 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.274 [2024-07-26 18:27:11.363196] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:45.274 [2024-07-26 18:27:11.363284] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.274 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.274 [2024-07-26 18:27:11.405020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.532 [2024-07-26 18:27:11.434799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.532 [2024-07-26 18:27:11.529974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.532 [2024-07-26 18:27:11.530025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.532 [2024-07-26 18:27:11.530071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.532 [2024-07-26 18:27:11.530085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.532 [2024-07-26 18:27:11.530107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.532 [2024-07-26 18:27:11.530165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.532 [2024-07-26 18:27:11.530222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.532 [2024-07-26 18:27:11.530278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.532 [2024-07-26 18:27:11.530274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.532 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.792 [2024-07-26 18:27:11.678273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.792 18:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:45.792 
18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.792 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.792 Malloc1 00:26:45.792 [2024-07-26 18:27:11.767476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.792 Malloc2 00:26:45.792 Malloc3 00:26:45.792 Malloc4 00:26:46.052 Malloc5 00:26:46.052 Malloc6 00:26:46.052 Malloc7 00:26:46.052 Malloc8 00:26:46.052 Malloc9 00:26:46.311 Malloc10 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1547314 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1547314 /var/tmp/bdevperf.sock 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1547314 ']' 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:46.311 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:46.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
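The --json /dev/fd/63 argument in the bdevperf command line above is the read end of a bash process substitution, so the generated attach-controller config is streamed straight into bdevperf without touching disk. The equivalent launch, with the long Jenkins workspace path abbreviated to a cd into the SPDK checkout:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# <(...) is what shows up as --json /dev/fd/63 in the traced command line
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # block until bdevperf's RPC socket is up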
00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 
"trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.312 { 00:26:46.312 "params": { 00:26:46.312 "name": "Nvme$subsystem", 00:26:46.312 "trtype": "$TEST_TRANSPORT", 00:26:46.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.312 "adrfam": "ipv4", 00:26:46.312 "trsvcid": "$NVMF_PORT", 00:26:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.312 "hdgst": ${hdgst:-false}, 00:26:46.312 "ddgst": ${ddgst:-false} 00:26:46.312 }, 00:26:46.312 "method": "bdev_nvme_attach_controller" 00:26:46.312 } 00:26:46.312 EOF 00:26:46.312 )") 00:26:46.312 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:46.313 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:26:46.313 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:46.313 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme1", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme2", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme3", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme4", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme5", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme6", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme7", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme8", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme9", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 },{ 00:26:46.313 "params": { 00:26:46.313 "name": "Nvme10", 00:26:46.313 "trtype": "tcp", 00:26:46.313 "traddr": "10.0.0.2", 00:26:46.313 "adrfam": "ipv4", 00:26:46.313 "trsvcid": "4420", 00:26:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:46.313 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:46.313 "hdgst": false, 00:26:46.313 "ddgst": false 00:26:46.313 }, 00:26:46.313 "method": "bdev_nvme_attach_controller" 00:26:46.313 }' 00:26:46.313 [2024-07-26 18:27:12.292969] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:46.313 [2024-07-26 18:27:12.293053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547314 ] 00:26:46.313 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.313 [2024-07-26 18:27:12.328926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:46.313 [2024-07-26 18:27:12.358589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.313 [2024-07-26 18:27:12.445040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.849 Running I/O for 10 seconds... 
00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:48.849 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.107 18:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:49.107 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1547314 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1547314 ']' 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1547314 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1547314 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:49.367 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1547314' 00:26:49.367 killing process with pid 1547314 00:26:49.367 18:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1547314
18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1547314
00:26:49.367 Received shutdown signal, test time was about 0.900010 seconds
00:26:49.367
00:26:49.367 Latency(us)
00:26:49.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:49.367 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.367 Verification LBA range: start 0x0 length 0x400
00:26:49.367 Nvme1n1 : 0.86 224.40 14.02 0.00 0.00 280478.40 20583.16 236123.78
00:26:49.367 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.367 Verification LBA range: start 0x0 length 0x400
00:26:49.367 Nvme2n1 : 0.90 284.70 17.79 0.00 0.00 217453.99 18447.17 262532.36
00:26:49.367 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.367 Verification LBA range: start 0x0 length 0x400
00:26:49.367 Nvme3n1 : 0.85 224.93 14.06 0.00 0.00 268718.33 34758.35 237677.23
00:26:49.367 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.367 Verification LBA range: start 0x0 length 0x400
00:26:49.367 Nvme4n1 : 0.87 220.90 13.81 0.00 0.00 267638.71 30874.74 256318.58
00:26:49.367 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.367 Verification LBA range: start 0x0 length 0x400
00:26:49.368 Nvme5n1 : 0.88 218.88 13.68 0.00 0.00 263493.40 22136.60 254765.13
00:26:49.368 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.368 Verification LBA range: start 0x0 length 0x400
00:26:49.368 Nvme6n1 : 0.86 223.35 13.96 0.00 0.00 252473.90 20583.16 260978.92
00:26:49.368 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.368 Verification LBA range: start 0x0 length 0x400
00:26:49.368 Nvme7n1 : 0.88 218.66 13.67 0.00 0.00 252088.32 21845.33 257872.02
00:26:49.368 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.368 Verification LBA range: start 0x0 length 0x400
00:26:49.368 Nvme8n1 : 0.89 215.47 13.47 0.00 0.00 250669.13 29709.65 288940.94
00:26:49.368 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.368 Verification LBA range: start 0x0 length 0x400
00:26:49.368 Nvme9n1 : 0.89 214.63 13.41 0.00 0.00 245832.06 21456.97 298261.62
00:26:49.368 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:49.368 Verification LBA range: start 0x0 length 0x400
00:26:49.368 Nvme10n1 : 0.88 216.98 13.56 0.00 0.00 236797.35 17864.63 265639.25
00:26:49.368 ===================================================================================================================
00:26:49.368 Total : 2262.90 141.43 0.00 0.00 252399.51 17864.63 298261.62
00:26:49.628 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1547137
00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.005 rmmod nvme_tcp 00:26:51.005 rmmod nvme_fabrics 00:26:51.005 rmmod nvme_keyring 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1547137 ']' 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1547137 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1547137 ']' 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1547137 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1547137 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1547137' 00:26:51.005 killing process with pid 1547137 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1547137 00:26:51.005 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1547137 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
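[editor's note] The stoptarget/nvmftestfini records above tear the run down in a fixed order: delete the bdevperf state file and the generated config, retry unloading the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are their output), then kill the nvmf_tgt pid recorded at startup. Roughly, under the same names the trace uses (a sketch, not the verbatim common.sh source):

rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"   # $testdir: test/nvmf/target, per the paths above
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break                  # may need retries while connections drain
done
modprobe -v -r nvme-fabrics
set -e
[ -n "$nvmfpid" ] && kill "$nvmfpid"                  # killprocess: SIGTERM the target (pid 1547137 here)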
00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.264 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.804 00:26:53.804 real 0m8.192s 00:26:53.804 user 0m25.770s 00:26:53.804 sys 0m1.600s 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:53.804 ************************************ 00:26:53.804 END TEST nvmf_shutdown_tc2 00:26:53.804 ************************************ 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.804 ************************************ 00:26:53.804 START TEST nvmf_shutdown_tc3 00:26:53.804 ************************************ 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
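[editor's note] Here tc2's teardown hands off to tc3's nvmftestinit. The records that follow scan the PCI bus for supported NVMe-oF NICs, matching the two e810-class ports at 0000:0a:00.0/0a:00.1 (device id 0x159b, ice driver) whose net devices are cvl_0_0 and cvl_0_1, and then rebuild the split-namespace test network. In outline, the @244-@268 setup traced below is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port -> target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and the reverse path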
00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:53.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:53.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.804 18:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.804 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:53.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:53.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.805 18:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:26:53.805 00:26:53.805 --- 10.0.0.2 ping statistics --- 00:26:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.805 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:26:53.805 00:26:53.805 --- 10.0.0.1 ping statistics --- 00:26:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.805 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1548349 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1548349 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1548349 ']' 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
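[editor's note] With both ping directions confirmed, nvmfappstart launches the target. The repeated `ip netns exec cvl_0_0_ns_spdk` prefix in the command above comes from nvmf/common.sh prepending NVMF_TARGET_NS_CMD to NVMF_APP each time it is sourced; nesting the same namespace is harmless. In outline, assuming $rootdir is the spdk checkout used throughout this log (the waitforlisten body here is a sketch, not the verbatim helper):

ip netns exec cvl_0_0_ns_spdk \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &   # cores 1-4, all tracepoint groups
nvmfpid=$!
# waitforlisten equivalent: block until the target's RPC socket answers.
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done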
00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:53.805 [2024-07-26 18:27:19.620498] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:53.805 [2024-07-26 18:27:19.620567] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.805 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.805 [2024-07-26 18:27:19.658000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:53.805 [2024-07-26 18:27:19.688282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.805 [2024-07-26 18:27:19.779051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.805 [2024-07-26 18:27:19.779121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.805 [2024-07-26 18:27:19.779147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.805 [2024-07-26 18:27:19.779161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.805 [2024-07-26 18:27:19.779173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.805 [2024-07-26 18:27:19.779296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.805 [2024-07-26 18:27:19.779319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.805 [2024-07-26 18:27:19.779390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:53.805 [2024-07-26 18:27:19.779392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:53.805 [2024-07-26 18:27:19.932534] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.805 18:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:53.805 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:53.806 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:53.806 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:53.806 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:54.065 
18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:54.065 Malloc1 00:26:54.065 [2024-07-26 18:27:20.022184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.065 Malloc2 00:26:54.065 Malloc3 00:26:54.065 Malloc4 00:26:54.065 Malloc5 00:26:54.323 Malloc6 00:26:54.323 Malloc7 00:26:54.323 Malloc8 00:26:54.323 Malloc9 00:26:54.323 Malloc10 00:26:54.323 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.323 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:54.323 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:54.323 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1548412 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1548412 /var/tmp/bdevperf.sock 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1548412 ']' 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:54.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
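[editor's note] After the TCP transport is created, ten Malloc bdevs are exported (one per cnode) and bdevperf is started as the host-side workload. Its config arrives on /dev/fd/63 because the test passes the generated JSON via process substitution; the flags match the @124 record above. A sketch, reusing the hypothetical generator from earlier in these notes:

"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_target_json_sketch {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &    # 64-deep, 64 KiB verify workload for 10 s
perfpid=$!                              # tc3 later kills this while I/O is in flight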
00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:54.581 {
00:26:54.581 "params": {
00:26:54.581 "name": "Nvme$subsystem",
00:26:54.581 "trtype": "$TEST_TRANSPORT",
00:26:54.581 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:54.581 "adrfam": "ipv4",
00:26:54.581 "trsvcid": "$NVMF_PORT",
00:26:54.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:54.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:54.581 "hdgst": ${hdgst:-false},
00:26:54.581 "ddgst": ${ddgst:-false}
00:26:54.581 },
00:26:54.581 "method": "bdev_nvme_attach_controller"
00:26:54.581 }
00:26:54.581 EOF
00:26:54.581 )")
00:26:54.581 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:26:54.582 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
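The xtrace above shows the harness building one bdev_nvme_attach_controller stanza per subsystem; the loop body repeats identically for cnode1 through cnode10. A minimal standalone sketch of that pattern, with illustrative values in place of the harness variables (the real helper in nvmf/common.sh takes the subsystem list from its arguments and splices the joined stanzas into a larger target config):

#!/usr/bin/env bash
# Build one attach-controller stanza per subsystem, as the traced loop does.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=()
for subsystem in 1 2 3; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the stanzas; wrapping them in [] makes the result well-formed
# JSON so jq can validate and pretty-print it, which is what the traced
# IFS=, / printf / jq sequence accomplishes inside the harness.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .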
00:26:54.582 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:26:54.582 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme1",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme2",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme3",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme4",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme5",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme6",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme7",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme8",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme9",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.582 "trsvcid": "4420",
00:26:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:26:54.582 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:26:54.582 "hdgst": false,
00:26:54.582 "ddgst": false
00:26:54.582 },
00:26:54.582 "method": "bdev_nvme_attach_controller"
00:26:54.582 },{
00:26:54.582 "params": {
00:26:54.582 "name": "Nvme10",
00:26:54.582 "trtype": "tcp",
00:26:54.582 "traddr": "10.0.0.2",
00:26:54.582 "adrfam": "ipv4",
00:26:54.583 "trsvcid": "4420",
00:26:54.583 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:26:54.583 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:26:54.583 "hdgst": false,
00:26:54.583 "ddgst": false
00:26:54.583 },
00:26:54.583 "method": "bdev_nvme_attach_controller"
00:26:54.583 }'
00:26:54.583 [2024-07-26 18:27:20.524007] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:26:54.583 [2024-07-26 18:27:20.524105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548412 ]
00:26:54.583 EAL: No free 2048 kB hugepages reported on node 1
00:26:54.583 [2024-07-26 18:27:20.560953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:26:54.583 [2024-07-26 18:27:20.590856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.583 [2024-07-26 18:27:20.678884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:55.959 Running I/O for 10 seconds...
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
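The waitforio helper traced above polls bdevperf over its RPC socket until the target bdev has completed at least 100 reads (here it succeeded on the first pass with read_io_count=131). A rough standalone equivalent of that loop; the rpc.py path and the pacing sleep are assumptions, since rpc_cmd in the trace is a harness wrapper:

# Sketch of the polling loop in target/shutdown.sh: retry up to 10 times,
# reading num_read_ops for the bdev via the bdevperf RPC socket, and
# succeed once at least 100 reads have completed.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1  # pacing between polls is an illustrative choice
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1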
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1548349
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1548349 ']'
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1548349
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548349
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:56.539 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:56.540 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548349'
killing process with pid 1548349
00:26:56.540 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1548349
00:26:56.540 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1548349
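killprocess, from autotest_common.sh, is what produces the "killing process with pid" line above: it checks that the pid is still alive, refuses to kill a bare sudo wrapper, then kills and reaps it. A condensed sketch of the logic as traced (simplified; the real helper has more branches than this run exercised):

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1   # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # Refuse to kill a bare sudo wrapper (here ps reported reactor_1).
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}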
00:26:56.540 [2024-07-26 18:27:22.599834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2581af0 is same with the state(5) to be set
00:26:56.540 [2024-07-26 18:27:22.606596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584610 is same with the state(5) to be set
00:26:56.541 [2024-07-26 18:27:22.609053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2581fb0 is same with the state(5) to be set
00:26:56.541 [2024-07-26 18:27:22.611384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582470 is same with the state(5) to be set
00:26:56.542 [2024-07-26 18:27:22.613432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582950 is same with the state(5) to be set
00:26:56.542 [2024-07-26 18:27:22.615082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set
18:27:22.615273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.542 [2024-07-26 18:27:22.615449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same 
with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615863] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.615938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582e10 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.617976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 [2024-07-26 18:27:22.618019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.543 [2024-07-26 18:27:22.618046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 [2024-07-26 18:27:22.618068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.543 [2024-07-26 18:27:22.618085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 [2024-07-26 18:27:22.618101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.543 [2024-07-26 18:27:22.618116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 [2024-07-26 18:27:22.618130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.543 [2024-07-26 18:27:22.618144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ac80 is same with the state(5) to be set 00:26:56.543 [2024-07-26 18:27:22.618201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 [2024-07-26 18:27:22.618224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.543 [2024-07-26 18:27:22.618240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 [2024-07-26 18:27:22.618254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.543 [2024-07-26 18:27:22.618268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.543 
00:26:56.543 [2024-07-26 18:27:22.618369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25837b0 is same with the state(5) to be set [message repeated with advancing timestamps through 18:27:22.619325; its output was interleaved mid-line with the initiator-side messages below, de-interleaved here]
00:26:56.543 [2024-07-26 18:27:22.618376 - 18:27:22.618497] [four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands, each completed ABORTED - SQ DELETION (00/08)] [2024-07-26 18:27:22.618513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256b070 is same with the state(5) to be set 00:26:56.543
00:26:56.543 [2024-07-26 18:27:22.618562 - 18:27:22.618680] [four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands, each completed ABORTED - SQ DELETION (00/08)] [2024-07-26 18:27:22.618694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea7610 is same with the state(5) to be set 00:26:56.544
00:26:56.544 [2024-07-26 18:27:22.618743 - 18:27:22.618857] [four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands, each completed ABORTED - SQ DELETION (00/08)] [2024-07-26 18:27:22.618872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257d0b0 is same with the state(5) to be set 00:26:56.544
00:26:56.544 [2024-07-26 18:27:22.618920 - 18:27:22.619038] [four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands, each completed ABORTED - SQ DELETION (00/08)] [2024-07-26 18:27:22.619056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4ce0 is same with the state(5) to be set 00:26:56.544
00:26:56.544 [2024-07-26 18:27:22.619110 - 18:27:22.619228] [four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands, each completed ABORTED - SQ DELETION (00/08)] [2024-07-26 18:27:22.619242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1f10 is same with the state(5) to be set 00:26:56.544
00:26:56.544 [2024-07-26 18:27:22.619297 - 18:27:22.619420] [four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands, each completed ABORTED - SQ DELETION (00/08)] [2024-07-26 18:27:22.619433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257dad0 is same with the state(5) to be set 00:26:56.544
00:26:56.544 [2024-07-26 18:27:22.620392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.544 [2024-07-26 18:27:22.620419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544
00:26:56.544 [2024-07-26 18:27:22.620491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544 [2024-07-26 18:27:22.620506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.544 [2024-07-26 18:27:22.620522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544 [2024-07-26 18:27:22.620537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.544 [2024-07-26 18:27:22.620551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544 [2024-07-26 18:27:22.620567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.544 [2024-07-26 18:27:22.620588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544 [2024-07-26 18:27:22.620593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584130 is same with the state(5) to be set 00:26:56.544 [2024-07-26 18:27:22.620605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.544 [2024-07-26 18:27:22.620620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 18:27:22.620620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584130 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544 the state(5) to be set 00:26:56.544 [2024-07-26 18:27:22.620637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584130 is same with [2024-07-26 18:27:22.620639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:1the state(5) to be set 00:26:56.544 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.544 [2024-07-26 18:27:22.620653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584130 is same with [2024-07-26 18:27:22.620654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:26:56.544 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.544 [2024-07-26 18:27:22.620667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584130 is same with the state(5) to be set 00:26:56.544 [2024-07-26 18:27:22.620672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.620977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.620991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.545 [2024-07-26 18:27:22.621660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.545 [2024-07-26 18:27:22.621676] nvme_qpair.c: 
00:26:56.545 [2024-07-26 18:27:22.621693 - 18:27:22.622136] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:49-63 nsid:1 lba:22656-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [15 command/completion pairs, lba stepping by 128 per cid]
00:26:56.545 [2024-07-26 18:27:22.622153 - 18:27:22.622434] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-8 nsid:1 lba:16384-17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) [9 pairs]
00:26:56.546 [2024-07-26 18:27:22.622475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:56.546 [2024-07-26 18:27:22.622551] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23adb90 was disconnected and freed. reset controller.
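The status pair SPDK prints as "(00/08)" is status code type 0x0 (GENERIC) with status code 0x08 (ABORTED - SQ DELETION): the commands were flushed because their submission queue was deleted during qpair teardown, not because of a media error. A minimal, hedged sketch of a completion callback that classifies these aborts with the public SPDK NVMe API (the name io_done and the resubmit policy are illustrative, not taken from this test):

    #include "spdk/nvme.h"

    /* Sketch only: classify the aborts logged above. SPDK prints the
     * completion status as (sct/sc); (00/08) = GENERIC / SQ DELETION. */
    static void
    io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Flushed by qpair deletion: safe to resubmit after
                     * the controller reset completes. */
            }
    }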
00:26:56.546 [2024-07-26 18:27:22.623361 - 18:27:22.625422] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 pairs, lba stepping by 128 per cid]
00:26:56.548 [2024-07-26 18:27:22.625516] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2460340 was disconnected and freed. reset controller.
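The -6 returned by spdk_nvme_qpair_process_completions is -ENXIO ("No such device or address"): the TCP qpair failed underneath the driver, its outstanding commands were drained as ABORTED - SQ DELETION, and bdev_nvme freed the qpair and scheduled a controller reset. A hedged sketch of that recovery flow against the public API (simplified; the real bdev_nvme path is considerably more involved):

    #include "spdk/nvme.h"

    /* Sketch, not the bdev_nvme implementation: poll an I/O qpair and,
     * on a fatal transport error such as -ENXIO (-6, as logged above),
     * free the qpair and reset the controller before resubmitting I/O. */
    static int
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
            if (rc < 0) {
                    spdk_nvme_ctrlr_free_io_qpair(qpair);
                    return spdk_nvme_ctrlr_reset(ctrlr); /* then re-create qpairs */
            }
            return 0;
    }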
00:26:56.548 [2024-07-26 18:27:22.626919 - 18:27:22.627106] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:59-63 nsid:1 lba:23936-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [5 pairs]
00:26:56.548 [2024-07-26 18:27:22.627123 - 18:27:22.628994] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-58 nsid:1 lba:16384-23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) [59 pairs]
00:26:56.549 [2024-07-26 18:27:22.629097] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2e86d70 was disconnected and freed. reset controller.
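Across every burst the aborted commands follow one arithmetic pattern: command cid n covers lba 16384 + 128 * n with len 128, i.e. 64 outstanding 128-block I/Os over one 8192-block window; this is consistent with a fixed queue-depth sequential workload being requeued after each reset. A hedged sketch of submitting such a batch (ns, qpair, bufs[] and io_done are assumed to be set up elsewhere; cid assignment belongs to the driver and merely happens to come out ascending here):

    /* Illustrative loop matching the lba pattern in the log:
     * lba = 16384 + 128 * n, 128 blocks per command, 64 commands. */
    for (uint32_t n = 0; n < 64; n++) {
            uint64_t lba = 16384 + 128 * (uint64_t)n;
            if (spdk_nvme_ns_cmd_read(ns, qpair, bufs[n], lba, 128,
                                      io_done, NULL, 0) != 0) {
                    break; /* queue full or qpair already failed */
            }
    }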
00:26:56.549 [2024-07-26 18:27:22.629145 - 18:27:22.630930] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-55 nsid:1 lba:16384-23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [56 pairs, lba stepping by 128 per cid]
m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.630946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.630959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.630975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.630989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.631019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.631067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.631099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.631129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.631159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.550 [2024-07-26 18:27:22.631190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631272] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24cb260 was disconnected and freed. reset controller. 
00:26:56.550 [2024-07-26 18:27:22.631405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:56.550 [2024-07-26 18:27:22.631462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea7610 (9): Bad file descriptor 00:26:56.550 [2024-07-26 18:27:22.631551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.550 [2024-07-26 18:27:22.631574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.550 [2024-07-26 18:27:22.631604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.550 [2024-07-26 18:27:22.631632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.550 [2024-07-26 18:27:22.631660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.550 [2024-07-26 18:27:22.631673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bab0 is same with the state(5) to be set 00:26:56.550 [2024-07-26 18:27:22.631699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254ac80 (9): Bad file descriptor 00:26:56.550 [2024-07-26 18:27:22.631733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23de300 (9): Bad file descriptor 00:26:56.550 [2024-07-26 18:27:22.631764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256b070 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.631796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257d0b0 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.631829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4ce0 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.631866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1f10 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.631918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.551 [2024-07-26 18:27:22.631939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.631955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.551 [2024-07-26 18:27:22.631970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 
18:27:22.631984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.551 [2024-07-26 18:27:22.631998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.632012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.551 [2024-07-26 18:27:22.632026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.632039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2544d50 is same with the state(5) to be set 00:26:56.551 [2024-07-26 18:27:22.632078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257dad0 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.635923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:56.551 [2024-07-26 18:27:22.635970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:56.551 [2024-07-26 18:27:22.635996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257bab0 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.636721] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:56.551 [2024-07-26 18:27:22.636765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:56.551 [2024-07-26 18:27:22.636795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2544d50 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.636988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.551 [2024-07-26 18:27:22.637019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea7610 with addr=10.0.0.2, port=4420 00:26:56.551 [2024-07-26 18:27:22.637039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea7610 is same with the state(5) to be set 00:26:56.551 [2024-07-26 18:27:22.637200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.551 [2024-07-26 18:27:22.637229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257dad0 with addr=10.0.0.2, port=4420 00:26:56.551 [2024-07-26 18:27:22.637246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257dad0 is same with the state(5) to be set 00:26:56.551 [2024-07-26 18:27:22.637323] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:56.551 [2024-07-26 18:27:22.637651] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:56.551 [2024-07-26 18:27:22.637744] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:56.551 [2024-07-26 18:27:22.637810] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:56.551 [2024-07-26 18:27:22.637895] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:56.551 [2024-07-26 18:27:22.638605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.551 [2024-07-26 18:27:22.638646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x257bab0 with addr=10.0.0.2, port=4420 00:26:56.551 [2024-07-26 18:27:22.638674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bab0 is same with the state(5) to be set 00:26:56.551 [2024-07-26 18:27:22.638706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea7610 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.638730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257dad0 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.638982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.551 [2024-07-26 18:27:22.639012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2544d50 with addr=10.0.0.2, port=4420 00:26:56.551 [2024-07-26 18:27:22.639029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2544d50 is same with the state(5) to be set 00:26:56.551 [2024-07-26 18:27:22.639067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257bab0 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.639088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:56.551 [2024-07-26 18:27:22.639103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:56.551 [2024-07-26 18:27:22.639119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:56.551 [2024-07-26 18:27:22.639142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:56.551 [2024-07-26 18:27:22.639158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:56.551 [2024-07-26 18:27:22.639172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:56.551 [2024-07-26 18:27:22.639251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.551 [2024-07-26 18:27:22.639273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.551 [2024-07-26 18:27:22.639290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2544d50 (9): Bad file descriptor 00:26:56.551 [2024-07-26 18:27:22.639307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:56.551 [2024-07-26 18:27:22.639321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:56.551 [2024-07-26 18:27:22.639334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:56.551 [2024-07-26 18:27:22.639397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.551 [2024-07-26 18:27:22.639417] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:56.551 [2024-07-26 18:27:22.639430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:56.551 [2024-07-26 18:27:22.639444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:26:56.551 [2024-07-26 18:27:22.639496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.551 [2024-07-26 18:27:22.641583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.641979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.641995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.551 [2024-07-26 18:27:22.642482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.551 [2024-07-26 18:27:22.642500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.642984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.642999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.643677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.643693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3290 is same with the state(5) to be set 00:26:56.552 [2024-07-26 18:27:22.644987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.552 [2024-07-26 18:27:22.645483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.552 [2024-07-26 18:27:22.645499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.553 [2024-07-26 18:27:22.645782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.553 [2024-07-26 18:27:22.645798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.553 [2024-07-26 18:27:22.645812 - 18:27:22.647013] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:26-63 nsid:1 lba:19712-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.553 [2024-07-26 18:27:22.647028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24616b0 is same with the state(5) to be set
00:26:56.553 [2024-07-26 18:27:22.648297 - 18:27:22.650352] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.555 [2024-07-26 18:27:22.650367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab290 is same with the state(5) to be set
00:26:56.555 [2024-07-26 18:27:22.651611 - 18:27:22.653611] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.556 [2024-07-26 18:27:22.653626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac700 is same with the state(5) to be set
00:26:56.556 [2024-07-26 18:27:22.654861 - 18:27:22.656226] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-42 nsid:1 lba:16384-21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:56.557 [2024-07-26 18:27:22.656548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 
18:27:22.656850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.656864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.656879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2cdf300 is same with the state(5) to be set 00:26:56.557 [2024-07-26 18:27:22.658525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.658971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.658987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.659001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.557 [2024-07-26 18:27:22.659017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.557 [2024-07-26 18:27:22.659031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.564 [2024-07-26 18:27:22.659048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.564 [2024-07-26 18:27:22.659069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.564 [2024-07-26 18:27:22.659087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.564 [2024-07-26 18:27:22.659101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.659980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.659997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:56.565 [2024-07-26 18:27:22.660412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.565 [2024-07-26 18:27:22.660548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.565 [2024-07-26 18:27:22.660563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc760 is same with the state(5) to be set 00:26:56.565 [2024-07-26 18:27:22.662644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.565 [2024-07-26 18:27:22.662689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:56.565 [2024-07-26 18:27:22.662710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:56.565 [2024-07-26 18:27:22.662727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:56.565 [2024-07-26 18:27:22.662851] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:56.565 [2024-07-26 18:27:22.662883] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
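Every completion in the two floods above carries the same status pair, (00/08): status code type 00h (generic command status) and status code 08h, which the NVMe base specification defines as Command Aborted due to SQ Deletion -- the expected outcome when the I/O submission queue is torn down while commands are still outstanding. A minimal bash sketch of decoding that "(SCT/SC)" pair (a hypothetical helper, not part of the SPDK test suite):

    decode_nvme_status() {
        # decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints
        local sct=$1 sc=$2
        case "$sct/$sc" in
            00/00) echo "SUCCESSFUL COMPLETION" ;;
            00/08) echo "ABORTED - SQ DELETION" ;;   # the status seen throughout this run
            *)     echo "sct=$sct sc=$sc (see the NVMe base spec, Generic Command Status values)" ;;
        esac
    }
    decode_nvme_status 00 08   # -> ABORTED - SQ DELETION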
00:26:56.565 [2024-07-26 18:27:22.662980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:56.824 task offset: 17536 on job bdev=Nvme6n1 fails
00:26:56.824
00:26:56.824 Latency(us)
00:26:56.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:56.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme1n1 ended in about 0.78 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme1n1 : 0.78 163.59 10.22 81.79 0.00 257461.16 18155.90 250104.79
00:26:56.824 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme2n1 ended in about 0.77 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme2n1 : 0.77 166.03 10.38 83.01 0.00 247517.49 12815.93 234570.33
00:26:56.824 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme3n1 ended in about 0.79 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme3n1 : 0.79 162.90 10.18 81.45 0.00 246386.03 17087.91 251658.24
00:26:56.824 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme4n1 ended in about 0.79 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme4n1 : 0.79 162.22 10.14 81.11 0.00 241427.85 19126.80 228356.55
00:26:56.824 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme5n1 ended in about 0.79 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme5n1 : 0.79 161.55 10.10 80.77 0.00 236377.88 21942.42 250104.79
00:26:56.824 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme6n1 ended in about 0.76 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme6n1 : 0.76 167.44 10.47 83.72 0.00 221032.49 5655.51 256318.58
00:26:56.824 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme7n1 ended in about 0.80 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme7n1 : 0.80 160.89 10.06 80.44 0.00 225325.76 20097.71 254765.13
00:26:56.824 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme8n1 ended in about 0.77 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme8n1 : 0.77 165.72 10.36 82.86 0.00 211705.43 10922.67 251658.24
00:26:56.824 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme9n1 ended in about 0.77 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme9n1 : 0.77 165.49 10.34 82.75 0.00 206310.46 17087.91 264085.81
00:26:56.824 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:56.824 Job: Nvme10n1 ended in about 0.80 seconds with error
00:26:56.824 Verification LBA range: start 0x0 length 0x400
00:26:56.824 Nvme10n1 : 0.80 80.08 5.00 80.08 0.00 313945.13 23495.87 293601.28
00:26:56.824 ===================================================================================================================
00:26:56.824 Total : 1555.90 97.24 817.99 0.00 238224.96 5655.51 293601.28
00:26:56.824 [2024-07-26 18:27:22.690254] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:56.824 [2024-07-26 18:27:22.690333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:56.824 [2024-07-26 18:27:22.690683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.824 [2024-07-26 18:27:22.690730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b1f10 with addr=10.0.0.2, port=4420
00:26:56.824 [2024-07-26 18:27:22.690752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1f10 is same with the state(5) to be set
00:26:56.824 [2024-07-26 18:27:22.690915 - 18:27:22.691358] (the same connect() failed / sock connection error / recv-state triple repeats for tqpair=0x23de300, 0x23d4ce0 and 0x256b070, addr=10.0.0.2, port=4420)
00:26:56.825 [2024-07-26 18:27:22.692940 - 18:27:22.693007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: resetting controller for [nqn.2016-06.io.spdk:cnode2], [nqn.2016-06.io.spdk:cnode6], [nqn.2016-06.io.spdk:cnode8] and [nqn.2016-06.io.spdk:cnode9]
00:26:56.825 [2024-07-26 18:27:22.693200 - 18:27:22.693421] (the same connect() failed / sock connection error / recv-state triple repeats for tqpair=0x257d0b0 and 0x254ac80, addr=10.0.0.2, port=4420)
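The MiB/s column in the table above follows directly from the IOPS column: every job uses a fixed 65536-byte IO, so MiB/s = IOPS x 65536 / 2^20 = IOPS / 16 (for Nvme1n1: 163.59 / 16 = 10.22, matching its row). A quick consistency check over the restored rows, written for the row layout above (test.log is a stand-in name for wherever this output was captured):

    # rows look like: "<elapsed> NvmeXn1 : <runtime> <IOPS> <MiB/s> <Fail/s> ..."
    awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" {
            printf "%-9s reported %s MiB/s, computed %.2f MiB/s\n", $2, $6, $5/16
         }' test.log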
00:26:56.825 [2024-07-26 18:27:22.693447 - 18:27:22.693517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1f10, 0x23de300, 0x23d4ce0 and 0x256b070 (9): Bad file descriptor
00:26:56.825 [2024-07-26 18:27:22.693573 - 18:27:22.693642] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (4 times)
00:26:56.825 [2024-07-26 18:27:22.693859 - 18:27:22.694446] (the same connect() failed (errno = 111) / sock connection error / recv-state triple repeats for tqpair=0x257dad0, 0x1ea7610, 0x257bab0 and 0x2544d50, addr=10.0.0.2, port=4420)
00:26:56.825 [2024-07-26 18:27:22.694464 - 18:27:22.694484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257d0b0 and 0x254ac80 (9): Bad file descriptor
00:26:56.825 [2024-07-26 18:27:22.694501] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:56.825 [2024-07-26 18:27:22.694515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:56.825 [2024-07-26 18:27:22.694530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:56.825 [2024-07-26 18:27:22.694551 - 18:27:22.694670] (the same error-state / reinitialization-failed / failed-state triple repeats for [nqn.2016-06.io.spdk:cnode3], [nqn.2016-06.io.spdk:cnode4] and [nqn.2016-06.io.spdk:cnode5])
00:26:56.825 [2024-07-26 18:27:22.694770 - 18:27:22.694818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (4 times)
00:26:56.825 [2024-07-26 18:27:22.694834 - 18:27:22.694891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257dad0, 0x1ea7610, 0x257bab0 and 0x2544d50 (9): Bad file descriptor
00:26:56.825 [2024-07-26 18:27:22.694907 - 18:27:22.694986] (the same error-state / reinitialization-failed / failed-state triple repeats for [nqn.2016-06.io.spdk:cnode7] and [nqn.2016-06.io.spdk:cnode10])
00:26:56.825 [2024-07-26 18:27:22.695046 - 18:27:22.695075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (2 times)
00:26:56.825 [2024-07-26 18:27:22.695091 - 18:27:22.695251] (the same error-state / reinitialization-failed / failed-state triple repeats for [nqn.2016-06.io.spdk:cnode2], [nqn.2016-06.io.spdk:cnode6], [nqn.2016-06.io.spdk:cnode8] and [nqn.2016-06.io.spdk:cnode9])
00:26:56.825 [2024-07-26 18:27:22.695289 - 18:27:22.695333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (4 times)
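errno 111 is ECONNREFUSED: the target application has already stopped, so nothing is listening on 10.0.0.2:4420 and every reconnect attempt is refused, which is why each controller above ends in the failed state rather than recovering. The same check can be made from a shell (a sketch only; the address and port are the ones in this log):

    # bash opens a TCP connection for a /dev/tcp redirection; a refused
    # connect() surfaces here exactly as it does in posix_sock_create above
    if timeout 1 bash -c ': </dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "target is listening"
    else
        echo "connect failed (refused or timed out)"   # e.g. errno 111
    fi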
00:26:57.084 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:26:57.084 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1548412
00:26:58.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1548412) - No such process
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:58.020 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
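The pid recorded for the target (1548412) had already exited by the time teardown ran, so kill -9 fails with "No such process" and the trace shows true executing right after it -- the usual idiom for keeping a best-effort kill from aborting a script running under set -e. As a sketch (using the nvmfpid variable the trace shows being cleared):

    # best-effort kill: the process may already be gone, and that is fine;
    # `|| true` swallows the non-zero exit so cleanup keeps going
    kill -9 "$nvmfpid" 2>/dev/null || true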
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.282 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:00.225 00:27:00.225 real 0m6.842s 00:27:00.225 user 0m15.148s 00:27:00.225 sys 0m1.359s 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:00.225 ************************************ 00:27:00.225 END TEST nvmf_shutdown_tc3 00:27:00.225 ************************************ 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:00.225 00:27:00.225 real 0m26.974s 00:27:00.225 user 1m15.714s 00:27:00.225 sys 0m6.219s 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:00.225 ************************************ 00:27:00.225 END TEST nvmf_shutdown 00:27:00.225 ************************************ 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:00.225 00:27:00.225 real 16m46.175s 00:27:00.225 user 47m9.345s 00:27:00.225 sys 3m55.268s 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.225 18:27:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:00.225 ************************************ 00:27:00.225 END TEST nvmf_target_extra 00:27:00.225 ************************************ 00:27:00.225 18:27:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:00.225 18:27:26 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:00.225 18:27:26 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:00.225 18:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:00.225 ************************************ 00:27:00.225 START TEST nvmf_host 00:27:00.225 ************************************ 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:00.225 * Looking for test storage... 
00:27:00.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.225 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.485 ************************************ 00:27:00.485 START TEST nvmf_multicontroller 00:27:00.485 ************************************ 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:00.485 * Looking for test storage... 
00:27:00.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:00.485 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:00.486 18:27:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.388 18:27:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:02.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:27:02.388 00:27:02.388 --- 10.0.0.2 ping statistics --- 00:27:02.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.388 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:02.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:27:02.388 00:27:02.388 --- 10.0.0.1 ping statistics --- 00:27:02.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.388 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:02.388 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1550902 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1550902 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1550902 ']' 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.646 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.646 [2024-07-26 18:27:28.592623] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
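For orientation, the test network that nvmf_tcp_init assembled above is two ports of one NIC: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 (the target side, which is why nvmf_tgt is launched under ip netns exec), while cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side). A condensed recap, with commands copied from the log (the cvl_* names are specific to this e810 host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # cross-namespace reachability check

The -m 0xE core mask handed to nvmf_tgt is 0b1110, i.e. cores 1 through 3, which matches the three reactors reported just below.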
00:27:02.646 [2024-07-26 18:27:28.592706] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.646 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.646 [2024-07-26 18:27:28.631221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:02.646 [2024-07-26 18:27:28.659381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:02.646 [2024-07-26 18:27:28.754156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.646 [2024-07-26 18:27:28.754206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.646 [2024-07-26 18:27:28.754230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.646 [2024-07-26 18:27:28.754241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.646 [2024-07-26 18:27:28.754252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.646 [2024-07-26 18:27:28.754300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.646 [2024-07-26 18:27:28.754429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.646 [2024-07-26 18:27:28.754433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 [2024-07-26 18:27:28.885219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 Malloc0 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 [2024-07-26 18:27:28.954422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 [2024-07-26 18:27:28.962259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 Malloc1 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:02.905 18:27:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.905 18:27:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.905 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.905 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:02.905 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1550985 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1550985 /var/tmp/bdevperf.sock 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1550985 ']' 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:02.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
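bdevperf is now idling: -z makes it wait for configuration over the RPC socket named by -r, and -q 128 -o 4096 -w write -t 1 preselect a queue-depth-128, 4096-byte write workload for a one-second run once perform_tests is issued. The attach that follows pins the host-side path with -i 10.0.0.2 -c 60000 (NVMF_HOST_FIRST_PORT), which is what lets the later duplicate-attach attempts be rejected deterministically. A condensed sketch of the same flow outside the harness, with values copied from this log and the jenkins workspace paths shortened:

# Start bdevperf idle on a private RPC socket, attach a controller, run the workload.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000    # host addr/port pinned
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests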
00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.906 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.471 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.472 NVMe0n1 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.472 1 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.472 request: 00:27:03.472 { 00:27:03.472 "name": "NVMe0", 00:27:03.472 "trtype": "tcp", 00:27:03.472 "traddr": "10.0.0.2", 00:27:03.472 "adrfam": "ipv4", 00:27:03.472 
"trsvcid": "4420", 00:27:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:03.472 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:03.472 "hostaddr": "10.0.0.2", 00:27:03.472 "hostsvcid": "60000", 00:27:03.472 "prchk_reftag": false, 00:27:03.472 "prchk_guard": false, 00:27:03.472 "hdgst": false, 00:27:03.472 "ddgst": false, 00:27:03.472 "method": "bdev_nvme_attach_controller", 00:27:03.472 "req_id": 1 00:27:03.472 } 00:27:03.472 Got JSON-RPC error response 00:27:03.472 response: 00:27:03.472 { 00:27:03.472 "code": -114, 00:27:03.472 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:03.472 } 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.472 request: 00:27:03.472 { 00:27:03.472 "name": "NVMe0", 00:27:03.472 "trtype": "tcp", 00:27:03.472 "traddr": "10.0.0.2", 00:27:03.472 "adrfam": "ipv4", 00:27:03.472 "trsvcid": "4420", 00:27:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:03.472 "hostaddr": "10.0.0.2", 00:27:03.472 "hostsvcid": "60000", 00:27:03.472 "prchk_reftag": false, 00:27:03.472 "prchk_guard": false, 00:27:03.472 "hdgst": false, 00:27:03.472 "ddgst": false, 00:27:03.472 "method": "bdev_nvme_attach_controller", 00:27:03.472 "req_id": 1 00:27:03.472 } 00:27:03.472 Got JSON-RPC error response 00:27:03.472 response: 00:27:03.472 { 00:27:03.472 "code": -114, 00:27:03.472 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:27:03.472 } 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.472 request: 00:27:03.472 { 00:27:03.472 "name": "NVMe0", 00:27:03.472 "trtype": "tcp", 00:27:03.472 "traddr": "10.0.0.2", 00:27:03.472 "adrfam": "ipv4", 00:27:03.472 "trsvcid": "4420", 00:27:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:03.472 "hostaddr": "10.0.0.2", 00:27:03.472 "hostsvcid": "60000", 00:27:03.472 "prchk_reftag": false, 00:27:03.472 "prchk_guard": false, 00:27:03.472 "hdgst": false, 00:27:03.472 "ddgst": false, 00:27:03.472 "multipath": "disable", 00:27:03.472 "method": "bdev_nvme_attach_controller", 00:27:03.472 "req_id": 1 00:27:03.472 } 00:27:03.472 Got JSON-RPC error response 00:27:03.472 response: 00:27:03.472 { 00:27:03.472 "code": -114, 00:27:03.472 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:03.472 } 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:03.472 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.730 request: 00:27:03.730 { 00:27:03.730 "name": "NVMe0", 00:27:03.730 "trtype": "tcp", 00:27:03.730 "traddr": "10.0.0.2", 00:27:03.730 "adrfam": "ipv4", 00:27:03.730 "trsvcid": "4420", 00:27:03.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:03.730 "hostaddr": "10.0.0.2", 00:27:03.730 "hostsvcid": "60000", 00:27:03.730 "prchk_reftag": false, 00:27:03.730 "prchk_guard": false, 00:27:03.730 "hdgst": false, 00:27:03.730 "ddgst": false, 00:27:03.730 "multipath": "failover", 00:27:03.730 "method": "bdev_nvme_attach_controller", 00:27:03.730 "req_id": 1 00:27:03.730 } 00:27:03.730 Got JSON-RPC error response 00:27:03.730 response: 00:27:03.730 { 00:27:03.730 "code": -114, 00:27:03.730 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:03.730 } 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.730 00:27:03.730 18:27:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.730 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:03.730 18:27:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:05.103 0 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1550985 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1550985 ']' 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1550985 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.103 18:27:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1550985 00:27:05.103 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1550985' 00:27:05.104 killing process with pid 1550985 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1550985 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1550985 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.104 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:05.362 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:05.362 [2024-07-26 18:27:29.067957] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:05.362 [2024-07-26 18:27:29.068051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550985 ] 00:27:05.362 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.362 [2024-07-26 18:27:29.100295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:05.362 [2024-07-26 18:27:29.130435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.362 [2024-07-26 18:27:29.216841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.362 [2024-07-26 18:27:29.814205] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 1be336ca-e717-4137-8a2a-106684939458 already exists 00:27:05.362 [2024-07-26 18:27:29.814249] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:1be336ca-e717-4137-8a2a-106684939458 alias for bdev NVMe1n1 00:27:05.362 [2024-07-26 18:27:29.814265] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:05.362 Running I/O for 1 seconds... 00:27:05.362 00:27:05.362 Latency(us) 00:27:05.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.362 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:05.362 NVMe0n1 : 1.01 17454.25 68.18 0.00 0.00 7312.87 4296.25 16408.27 00:27:05.362 =================================================================================================================== 00:27:05.362 Total : 17454.25 68.18 0.00 0.00 7312.87 4296.25 16408.27 00:27:05.362 Received shutdown signal, test time was about 1.000000 seconds 00:27:05.362 00:27:05.362 Latency(us) 00:27:05.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.362 =================================================================================================================== 00:27:05.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.362 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.362 rmmod nvme_tcp 00:27:05.362 rmmod nvme_fabrics 00:27:05.362 rmmod nvme_keyring 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1550902 ']' 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1550902 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1550902 ']' 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1550902 00:27:05.362 18:27:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1550902 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1550902' 00:27:05.362 killing process with pid 1550902 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1550902 00:27:05.362 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1550902 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.622 18:27:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:08.151 00:27:08.151 real 0m7.285s 00:27:08.151 user 0m11.442s 00:27:08.151 sys 0m2.221s 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:08.151 ************************************ 00:27:08.151 END TEST nvmf_multicontroller 00:27:08.151 ************************************ 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.151 ************************************ 00:27:08.151 START TEST nvmf_aer 00:27:08.151 ************************************ 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:08.151 * Looking for test storage... 
00:27:08.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:08.151 18:27:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.525 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.525 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.525 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.525 18:27:35 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.525 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.525 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:09.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:09.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:27:09.784 00:27:09.784 --- 10.0.0.2 ping statistics --- 00:27:09.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.784 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:27:09.784 00:27:09.784 --- 10.0.0.1 ping statistics --- 00:27:09.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.784 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1553195 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1553195 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1553195 ']' 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.784 18:27:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.784 [2024-07-26 18:27:35.844920] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
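The two pings above succeed because nvmf_tcp_init, traced a few records earlier, splits the two NIC ports across network namespaces before the target app starts. Condensed from that trace (an illustrative recap, not a new transcript):

    # Target port cvl_0_0 moves into a private namespace with 10.0.0.2;
    # initiator port cvl_0_1 stays in the root namespace with 10.0.0.1;
    # TCP/4420 is opened for the NVMe-oF listener.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns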
00:27:09.784 [2024-07-26 18:27:35.844986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.784 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.784 [2024-07-26 18:27:35.880957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:09.784 [2024-07-26 18:27:35.911071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.042 [2024-07-26 18:27:36.002362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.042 [2024-07-26 18:27:36.002421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.042 [2024-07-26 18:27:36.002438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.042 [2024-07-26 18:27:36.002452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.042 [2024-07-26 18:27:36.002464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.042 [2024-07-26 18:27:36.002521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.042 [2024-07-26 18:27:36.002573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.042 [2024-07-26 18:27:36.002692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.042 [2024-07-26 18:27:36.002694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.042 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.042 [2024-07-26 18:27:36.140216] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.043 Malloc0 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:10.043 18:27:36 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.043 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.300 [2024-07-26 18:27:36.191013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.300 [ 00:27:10.300 { 00:27:10.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:10.300 "subtype": "Discovery", 00:27:10.300 "listen_addresses": [], 00:27:10.300 "allow_any_host": true, 00:27:10.300 "hosts": [] 00:27:10.300 }, 00:27:10.300 { 00:27:10.300 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.300 "subtype": "NVMe", 00:27:10.300 "listen_addresses": [ 00:27:10.300 { 00:27:10.300 "trtype": "TCP", 00:27:10.300 "adrfam": "IPv4", 00:27:10.300 "traddr": "10.0.0.2", 00:27:10.300 "trsvcid": "4420" 00:27:10.300 } 00:27:10.300 ], 00:27:10.300 "allow_any_host": true, 00:27:10.300 "hosts": [], 00:27:10.300 "serial_number": "SPDK00000000000001", 00:27:10.300 "model_number": "SPDK bdev Controller", 00:27:10.300 "max_namespaces": 2, 00:27:10.300 "min_cntlid": 1, 00:27:10.300 "max_cntlid": 65519, 00:27:10.300 "namespaces": [ 00:27:10.300 { 00:27:10.300 "nsid": 1, 00:27:10.300 "bdev_name": "Malloc0", 00:27:10.300 "name": "Malloc0", 00:27:10.300 "nguid": "1B7EF021147D4B4898ACF282615F4509", 00:27:10.300 "uuid": "1b7ef021-147d-4b48-98ac-f282615f4509" 00:27:10.300 } 00:27:10.300 ] 00:27:10.300 } 00:27:10.300 ] 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1553218 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@1265 -- # local i=0 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:10.300 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:27:10.300 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.559 Malloc1 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.559 Asynchronous Event Request test 00:27:10.559 Attaching to 10.0.0.2 00:27:10.559 Attached to 10.0.0.2 00:27:10.559 Registering asynchronous event callbacks... 00:27:10.559 Starting namespace attribute notice tests for all controllers... 00:27:10.559 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:10.559 aer_cb - Changed Namespace 00:27:10.559 Cleaning up... 
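The "Asynchronous Event Request test" output above is driven by three moving parts: the aer example binary connects and arms AER callbacks, touching a sentinel file once it is ready; the harness polls for that file (the waitforfile loop traced here); and adding a second namespace then fires the namespace-attribute-changed notice the binary is waiting on. A minimal sketch of the same sequence, assuming this workspace's paths:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    touch_file=/tmp/aer_touch_file
    rm -f "$touch_file"

    # Flags mirror the trace above; -t names the sentinel file touched
    # once the AER callbacks are registered.
    "$spdk/test/nvme/aer/aer" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t "$touch_file" &
    aerpid=$!

    # waitforfile equivalent: poll until the handler is armed.
    while [ ! -e "$touch_file" ]; do sleep 0.1; done

    # Adding Malloc1 as namespace 2 triggers the AER the binary expects.
    "$spdk/scripts/rpc.py" bdev_malloc_create 64 4096 --name Malloc1
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"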
00:27:10.559 [ 00:27:10.559 { 00:27:10.559 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:10.559 "subtype": "Discovery", 00:27:10.559 "listen_addresses": [], 00:27:10.559 "allow_any_host": true, 00:27:10.559 "hosts": [] 00:27:10.559 }, 00:27:10.559 { 00:27:10.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.559 "subtype": "NVMe", 00:27:10.559 "listen_addresses": [ 00:27:10.559 { 00:27:10.559 "trtype": "TCP", 00:27:10.559 "adrfam": "IPv4", 00:27:10.559 "traddr": "10.0.0.2", 00:27:10.559 "trsvcid": "4420" 00:27:10.559 } 00:27:10.559 ], 00:27:10.559 "allow_any_host": true, 00:27:10.559 "hosts": [], 00:27:10.559 "serial_number": "SPDK00000000000001", 00:27:10.559 "model_number": "SPDK bdev Controller", 00:27:10.559 "max_namespaces": 2, 00:27:10.559 "min_cntlid": 1, 00:27:10.559 "max_cntlid": 65519, 00:27:10.559 "namespaces": [ 00:27:10.559 { 00:27:10.559 "nsid": 1, 00:27:10.559 "bdev_name": "Malloc0", 00:27:10.559 "name": "Malloc0", 00:27:10.559 "nguid": "1B7EF021147D4B4898ACF282615F4509", 00:27:10.559 "uuid": "1b7ef021-147d-4b48-98ac-f282615f4509" 00:27:10.559 }, 00:27:10.559 { 00:27:10.559 "nsid": 2, 00:27:10.559 "bdev_name": "Malloc1", 00:27:10.559 "name": "Malloc1", 00:27:10.559 "nguid": "FAF6990B531443F9858124A34C772680", 00:27:10.559 "uuid": "faf6990b-5314-43f9-8581-24a34c772680" 00:27:10.559 } 00:27:10.559 ] 00:27:10.559 } 00:27:10.559 ] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1553218 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.559 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.559 rmmod 
nvme_tcp 00:27:10.559 rmmod nvme_fabrics 00:27:10.559 rmmod nvme_keyring 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1553195 ']' 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1553195 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1553195 ']' 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1553195 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553195 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553195' 00:27:10.818 killing process with pid 1553195 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1553195 00:27:10.818 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1553195 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.078 18:27:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.984 00:27:12.984 real 0m5.289s 00:27:12.984 user 0m4.441s 00:27:12.984 sys 0m1.769s 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:12.984 ************************************ 00:27:12.984 END TEST nvmf_aer 00:27:12.984 ************************************ 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.984 
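Both host tests in this section end with the same nvmftestfini teardown. In outline (an illustrative recap; the real killprocess additionally verifies the process name before signalling):

    # nvmftestfini: unload host-side transport modules, stop the target app,
    # then flush the initiator-side address.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess/wait on the nvmf_tgt pid
    ip -4 addr flush cvl_0_1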
************************************ 00:27:12.984 START TEST nvmf_async_init 00:27:12.984 ************************************ 00:27:12.984 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:12.984 * Looking for test storage... 00:27:13.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.243 
18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.243 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=83d28d7f30e34a65b13108468840a8b4 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.244 18:27:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.149 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.149 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.149 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.149 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.149 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.149 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
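One detail worth noting from the async_init prologue above: the namespace NGUID is simply a random UUID with its dashes stripped (host/async_init.sh@20), which yields the 32-hex-digit form the test feeds to the null bdev:

    # host/async_init.sh@20 equivalent:
    nguid=$(uuidgen | tr -d -)
    echo "$nguid"   # e.g. 83d28d7f30e34a65b13108468840a8b4, as in the trace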
00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:15.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:15.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.150 
18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:15.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:15.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.150 18:27:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:15.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:27:15.150 00:27:15.150 --- 10.0.0.2 ping statistics --- 00:27:15.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.150 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:15.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:27:15.150 00:27:15.150 --- 10.0.0.1 ping statistics --- 00:27:15.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.150 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:15.150 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1555267 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1555267 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1555267 ']' 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.151 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.151 [2024-07-26 18:27:41.234541] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:15.151 [2024-07-26 18:27:41.234633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.151 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.151 [2024-07-26 18:27:41.271683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:15.410 [2024-07-26 18:27:41.302901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.410 [2024-07-26 18:27:41.396887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.410 [2024-07-26 18:27:41.396951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.410 [2024-07-26 18:27:41.396975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.410 [2024-07-26 18:27:41.396986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.410 [2024-07-26 18:27:41.396997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.410 [2024-07-26 18:27:41.397028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.410 [2024-07-26 18:27:41.549301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.410 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.669 null0 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 83d28d7f30e34a65b13108468840a8b4 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.669 [2024-07-26 18:27:41.589584] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.669 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.929 nvme0n1 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.929 [ 00:27:15.929 { 00:27:15.929 "name": "nvme0n1", 00:27:15.929 "aliases": [ 00:27:15.929 "83d28d7f-30e3-4a65-b131-08468840a8b4" 00:27:15.929 ], 00:27:15.929 "product_name": "NVMe disk", 00:27:15.929 "block_size": 512, 00:27:15.929 "num_blocks": 2097152, 00:27:15.929 "uuid": "83d28d7f-30e3-4a65-b131-08468840a8b4", 00:27:15.929 "assigned_rate_limits": { 00:27:15.929 "rw_ios_per_sec": 0, 00:27:15.929 "rw_mbytes_per_sec": 0, 00:27:15.929 "r_mbytes_per_sec": 0, 00:27:15.929 "w_mbytes_per_sec": 0 00:27:15.929 }, 00:27:15.929 "claimed": false, 00:27:15.929 "zoned": false, 00:27:15.929 "supported_io_types": { 00:27:15.929 "read": true, 00:27:15.929 "write": true, 00:27:15.929 "unmap": false, 00:27:15.929 "flush": true, 00:27:15.929 "reset": true, 00:27:15.929 "nvme_admin": true, 00:27:15.929 "nvme_io": true, 00:27:15.929 "nvme_io_md": false, 00:27:15.929 "write_zeroes": true, 00:27:15.929 "zcopy": false, 00:27:15.929 "get_zone_info": false, 00:27:15.929 "zone_management": false, 00:27:15.929 "zone_append": false, 00:27:15.929 "compare": true, 00:27:15.929 "compare_and_write": true, 00:27:15.929 "abort": true, 00:27:15.929 "seek_hole": false, 00:27:15.929 "seek_data": false, 00:27:15.929 "copy": true, 00:27:15.929 "nvme_iov_md": false 00:27:15.929 }, 00:27:15.929 "memory_domains": [ 00:27:15.929 { 00:27:15.929 "dma_device_id": "system", 00:27:15.929 "dma_device_type": 1 00:27:15.929 } 00:27:15.929 ], 00:27:15.929 "driver_specific": { 00:27:15.929 "nvme": [ 00:27:15.929 { 00:27:15.929 "trid": { 00:27:15.929 
"trtype": "TCP", 00:27:15.929 "adrfam": "IPv4", 00:27:15.929 "traddr": "10.0.0.2", 00:27:15.929 "trsvcid": "4420", 00:27:15.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:15.929 }, 00:27:15.929 "ctrlr_data": { 00:27:15.929 "cntlid": 1, 00:27:15.929 "vendor_id": "0x8086", 00:27:15.929 "model_number": "SPDK bdev Controller", 00:27:15.929 "serial_number": "00000000000000000000", 00:27:15.929 "firmware_revision": "24.09", 00:27:15.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.929 "oacs": { 00:27:15.929 "security": 0, 00:27:15.929 "format": 0, 00:27:15.929 "firmware": 0, 00:27:15.929 "ns_manage": 0 00:27:15.929 }, 00:27:15.929 "multi_ctrlr": true, 00:27:15.929 "ana_reporting": false 00:27:15.929 }, 00:27:15.929 "vs": { 00:27:15.929 "nvme_version": "1.3" 00:27:15.929 }, 00:27:15.929 "ns_data": { 00:27:15.929 "id": 1, 00:27:15.929 "can_share": true 00:27:15.929 } 00:27:15.929 } 00:27:15.929 ], 00:27:15.929 "mp_policy": "active_passive" 00:27:15.929 } 00:27:15.929 } 00:27:15.929 ] 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.929 [2024-07-26 18:27:41.842143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:15.929 [2024-07-26 18:27:41.842231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995850 (9): Bad file descriptor 00:27:15.929 [2024-07-26 18:27:41.984196] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.929 18:27:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.929 [ 00:27:15.929 { 00:27:15.929 "name": "nvme0n1", 00:27:15.929 "aliases": [ 00:27:15.929 "83d28d7f-30e3-4a65-b131-08468840a8b4" 00:27:15.929 ], 00:27:15.929 "product_name": "NVMe disk", 00:27:15.929 "block_size": 512, 00:27:15.929 "num_blocks": 2097152, 00:27:15.929 "uuid": "83d28d7f-30e3-4a65-b131-08468840a8b4", 00:27:15.929 "assigned_rate_limits": { 00:27:15.929 "rw_ios_per_sec": 0, 00:27:15.929 "rw_mbytes_per_sec": 0, 00:27:15.929 "r_mbytes_per_sec": 0, 00:27:15.929 "w_mbytes_per_sec": 0 00:27:15.929 }, 00:27:15.929 "claimed": false, 00:27:15.929 "zoned": false, 00:27:15.929 "supported_io_types": { 00:27:15.929 "read": true, 00:27:15.929 "write": true, 00:27:15.929 "unmap": false, 00:27:15.929 "flush": true, 00:27:15.929 "reset": true, 00:27:15.929 "nvme_admin": true, 00:27:15.929 "nvme_io": true, 00:27:15.929 "nvme_io_md": false, 00:27:15.929 "write_zeroes": true, 00:27:15.929 "zcopy": false, 00:27:15.929 "get_zone_info": false, 00:27:15.929 "zone_management": false, 00:27:15.929 "zone_append": false, 00:27:15.929 "compare": true, 00:27:15.929 "compare_and_write": true, 00:27:15.929 "abort": true, 00:27:15.929 "seek_hole": false, 00:27:15.929 "seek_data": false, 00:27:15.929 "copy": true, 00:27:15.929 "nvme_iov_md": false 00:27:15.929 }, 00:27:15.929 "memory_domains": [ 00:27:15.929 { 00:27:15.929 "dma_device_id": "system", 00:27:15.929 "dma_device_type": 1 00:27:15.929 } 00:27:15.929 ], 00:27:15.929 "driver_specific": { 00:27:15.929 "nvme": [ 00:27:15.929 { 00:27:15.929 "trid": { 00:27:15.929 "trtype": "TCP", 00:27:15.929 "adrfam": "IPv4", 00:27:15.929 "traddr": "10.0.0.2", 00:27:15.929 "trsvcid": "4420", 00:27:15.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:15.929 }, 00:27:15.929 "ctrlr_data": { 00:27:15.929 "cntlid": 2, 00:27:15.929 "vendor_id": "0x8086", 00:27:15.929 "model_number": "SPDK bdev Controller", 00:27:15.929 "serial_number": "00000000000000000000", 00:27:15.929 "firmware_revision": "24.09", 00:27:15.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.930 "oacs": { 00:27:15.930 "security": 0, 00:27:15.930 "format": 0, 00:27:15.930 "firmware": 0, 00:27:15.930 "ns_manage": 0 00:27:15.930 }, 00:27:15.930 "multi_ctrlr": true, 00:27:15.930 "ana_reporting": false 00:27:15.930 }, 00:27:15.930 "vs": { 00:27:15.930 "nvme_version": "1.3" 00:27:15.930 }, 00:27:15.930 "ns_data": { 00:27:15.930 "id": 1, 00:27:15.930 "can_share": true 00:27:15.930 } 00:27:15.930 } 00:27:15.930 ], 00:27:15.930 "mp_policy": "active_passive" 00:27:15.930 } 00:27:15.930 } 00:27:15.930 ] 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.930 18:27:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9YMS1Yxmrr 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9YMS1Yxmrr 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.930 [2024-07-26 18:27:42.034777] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:15.930 [2024-07-26 18:27:42.034951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9YMS1Yxmrr 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.930 [2024-07-26 18:27:42.042784] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9YMS1Yxmrr 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.930 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:15.930 [2024-07-26 18:27:42.050799] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:15.930 [2024-07-26 18:27:42.050853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:16.189 nvme0n1 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:16.189 [ 00:27:16.189 { 00:27:16.189 "name": "nvme0n1", 00:27:16.189 "aliases": [ 00:27:16.189 "83d28d7f-30e3-4a65-b131-08468840a8b4" 00:27:16.189 ], 00:27:16.189 "product_name": "NVMe disk", 00:27:16.189 "block_size": 512, 00:27:16.189 "num_blocks": 2097152, 00:27:16.189 "uuid": "83d28d7f-30e3-4a65-b131-08468840a8b4", 00:27:16.189 "assigned_rate_limits": { 00:27:16.189 "rw_ios_per_sec": 0, 00:27:16.189 "rw_mbytes_per_sec": 0, 00:27:16.189 "r_mbytes_per_sec": 0, 00:27:16.189 "w_mbytes_per_sec": 0 00:27:16.189 }, 00:27:16.189 "claimed": false, 00:27:16.189 "zoned": false, 00:27:16.189 "supported_io_types": { 00:27:16.189 "read": true, 00:27:16.189 "write": true, 00:27:16.189 "unmap": false, 00:27:16.189 "flush": true, 00:27:16.189 "reset": true, 00:27:16.189 "nvme_admin": true, 00:27:16.189 "nvme_io": true, 00:27:16.189 "nvme_io_md": false, 00:27:16.189 "write_zeroes": true, 00:27:16.189 "zcopy": false, 00:27:16.189 "get_zone_info": false, 00:27:16.189 "zone_management": false, 00:27:16.189 "zone_append": false, 00:27:16.189 "compare": true, 00:27:16.189 "compare_and_write": true, 00:27:16.189 "abort": true, 00:27:16.189 "seek_hole": false, 00:27:16.189 "seek_data": false, 00:27:16.189 "copy": true, 00:27:16.189 "nvme_iov_md": false 00:27:16.189 }, 00:27:16.189 "memory_domains": [ 00:27:16.189 { 00:27:16.189 "dma_device_id": "system", 00:27:16.189 "dma_device_type": 1 00:27:16.189 } 00:27:16.189 ], 00:27:16.189 "driver_specific": { 00:27:16.189 "nvme": [ 00:27:16.189 { 00:27:16.189 "trid": { 00:27:16.189 "trtype": "TCP", 00:27:16.189 "adrfam": "IPv4", 00:27:16.189 "traddr": "10.0.0.2", 00:27:16.189 "trsvcid": "4421", 00:27:16.189 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:16.189 }, 00:27:16.189 "ctrlr_data": { 00:27:16.189 "cntlid": 3, 00:27:16.189 "vendor_id": "0x8086", 00:27:16.189 "model_number": "SPDK bdev Controller", 00:27:16.189 "serial_number": "00000000000000000000", 00:27:16.189 "firmware_revision": "24.09", 00:27:16.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:16.189 "oacs": { 00:27:16.189 "security": 0, 00:27:16.189 "format": 0, 00:27:16.189 "firmware": 0, 00:27:16.189 "ns_manage": 0 00:27:16.189 }, 00:27:16.189 "multi_ctrlr": true, 00:27:16.189 "ana_reporting": false 00:27:16.189 }, 00:27:16.189 "vs": { 00:27:16.189 "nvme_version": "1.3" 00:27:16.189 }, 00:27:16.189 "ns_data": { 00:27:16.189 "id": 1, 00:27:16.189 "can_share": true 00:27:16.189 } 00:27:16.189 } 00:27:16.189 ], 00:27:16.189 "mp_policy": "active_passive" 00:27:16.189 } 00:27:16.189 } 00:27:16.189 ] 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.9YMS1Yxmrr 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:16.189 18:27:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.189 rmmod nvme_tcp 00:27:16.189 rmmod nvme_fabrics 00:27:16.189 rmmod nvme_keyring 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1555267 ']' 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1555267 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1555267 ']' 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1555267 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1555267 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1555267' 00:27:16.189 killing process with pid 1555267 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1555267 00:27:16.189 [2024-07-26 18:27:42.254772] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:16.189 [2024-07-26 18:27:42.254810] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:16.189 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1555267 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.450 18:27:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.450 18:27:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.391 18:27:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.391 00:27:18.391 real 0m5.417s 00:27:18.391 user 0m2.037s 00:27:18.391 sys 0m1.769s 00:27:18.391 18:27:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:18.391 18:27:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:18.391 ************************************ 00:27:18.391 END TEST nvmf_async_init 00:27:18.391 ************************************ 00:27:18.391 18:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:18.391 18:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:18.392 18:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:18.392 18:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.651 ************************************ 00:27:18.651 START TEST dma 00:27:18.651 ************************************ 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:18.651 * Looking for test storage... 00:27:18.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.651 
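common.sh has just minted a fresh host identity via nvme gen-hostnqn and stored it in NVME_HOSTNQN/NVME_HOSTID. A hedged example of how such an identity is typically consumed by a kernel-initiator connect; the target address and subsystem NQN are illustrative, and the HOSTID derivation is an assumption (in the trace the ID equals the UUID portion of the NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation of the bare UUID
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn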
18:27:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.651 18:27:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.652 18:27:44 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:18.652 00:27:18.652 real 0m0.063s 00:27:18.652 user 0m0.018s 00:27:18.652 sys 0m0.050s 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:18.652 ************************************ 00:27:18.652 END TEST dma 00:27:18.652 ************************************ 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.652 ************************************ 00:27:18.652 START TEST nvmf_identify 00:27:18.652 ************************************ 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:18.652 * Looking for test storage... 00:27:18.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.652 18:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.561 18:27:46 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:20.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.561 18:27:46 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:20.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:20.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:20.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.561 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.562 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:27:20.821 00:27:20.821 --- 10.0.0.2 ping statistics --- 00:27:20.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.821 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:27:20.821 00:27:20.821 --- 10.0.0.1 ping statistics --- 00:27:20.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.821 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1557344 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1557344 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1557344 ']' 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.821 18:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.821 [2024-07-26 18:27:46.863446] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:20.821 [2024-07-26 18:27:46.863537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.821 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.821 [2024-07-26 18:27:46.907923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:20.821 [2024-07-26 18:27:46.938307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.080 [2024-07-26 18:27:47.033297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.080 [2024-07-26 18:27:47.033346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.080 [2024-07-26 18:27:47.033360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.080 [2024-07-26 18:27:47.033373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.080 [2024-07-26 18:27:47.033383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.080 [2024-07-26 18:27:47.033703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.080 [2024-07-26 18:27:47.033744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.080 [2024-07-26 18:27:47.033772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.080 [2024-07-26 18:27:47.033774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.018 [2024-07-26 18:27:47.823619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.018 Malloc0 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.018 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.019 [2024-07-26 18:27:47.895091] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.019 [ 00:27:22.019 { 00:27:22.019 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:22.019 "subtype": "Discovery", 00:27:22.019 "listen_addresses": [ 00:27:22.019 { 00:27:22.019 "trtype": "TCP", 00:27:22.019 "adrfam": "IPv4", 00:27:22.019 "traddr": "10.0.0.2", 00:27:22.019 "trsvcid": "4420" 00:27:22.019 } 00:27:22.019 ], 00:27:22.019 "allow_any_host": true, 00:27:22.019 "hosts": [] 00:27:22.019 }, 00:27:22.019 { 00:27:22.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.019 "subtype": "NVMe", 00:27:22.019 "listen_addresses": [ 00:27:22.019 { 00:27:22.019 "trtype": "TCP", 00:27:22.019 "adrfam": "IPv4", 00:27:22.019 "traddr": "10.0.0.2", 00:27:22.019 "trsvcid": "4420" 00:27:22.019 } 00:27:22.019 ], 00:27:22.019 "allow_any_host": true, 00:27:22.019 "hosts": [], 00:27:22.019 "serial_number": "SPDK00000000000001", 00:27:22.019 "model_number": "SPDK bdev Controller", 00:27:22.019 "max_namespaces": 32, 00:27:22.019 "min_cntlid": 1, 00:27:22.019 "max_cntlid": 65519, 00:27:22.019 "namespaces": [ 00:27:22.019 { 00:27:22.019 "nsid": 1, 00:27:22.019 "bdev_name": "Malloc0", 00:27:22.019 "name": "Malloc0", 00:27:22.019 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:22.019 "eui64": "ABCDEF0123456789", 00:27:22.019 "uuid": "3029d869-3a04-4af8-aad5-dd34ab9f362a" 00:27:22.019 } 00:27:22.019 ] 00:27:22.019 } 00:27:22.019 ] 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.019 18:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:22.019 [2024-07-26 18:27:47.935136] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
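(Note on the trace above: everything the identify tool is about to query was configured through the rpc_cmd wrappers in host/identify.sh. A minimal by-hand sketch of the same bring-up, assuming the nvmf_tgt launched under ip netns exec earlier in this run and its default /var/tmp/spdk.sock RPC socket; the cvl_0_0/cvl_0_1 interface names are specific to this rig:)

    # network plumbing done by nvmftestinit: target port isolated in a netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target configuration, mirroring the rpc_cmd sequence traced above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420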
00:27:22.019 [2024-07-26 18:27:47.935177] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557489 ] 00:27:22.019 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.019 [2024-07-26 18:27:47.953753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:22.019 [2024-07-26 18:27:47.971299] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:22.019 [2024-07-26 18:27:47.971361] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:22.019 [2024-07-26 18:27:47.971391] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:22.019 [2024-07-26 18:27:47.971405] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:22.019 [2024-07-26 18:27:47.971418] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:22.019 [2024-07-26 18:27:47.971777] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:22.019 [2024-07-26 18:27:47.971827] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x127e630 0 00:27:22.019 [2024-07-26 18:27:47.978087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:22.019 [2024-07-26 18:27:47.978111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:22.019 [2024-07-26 18:27:47.978121] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:22.019 [2024-07-26 18:27:47.978127] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:22.019 [2024-07-26 18:27:47.978176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.978188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.978196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.019 [2024-07-26 18:27:47.978213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:22.019 [2024-07-26 18:27:47.978239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.019 [2024-07-26 18:27:47.985074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.019 [2024-07-26 18:27:47.985093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.019 [2024-07-26 18:27:47.985101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.019 [2024-07-26 18:27:47.985130] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:22.019 [2024-07-26 18:27:47.985141] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:22.019 [2024-07-26 18:27:47.985151] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs 
(no timeout) 00:27:22.019 [2024-07-26 18:27:47.985172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.019 [2024-07-26 18:27:47.985199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.019 [2024-07-26 18:27:47.985222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.019 [2024-07-26 18:27:47.985408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.019 [2024-07-26 18:27:47.985423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.019 [2024-07-26 18:27:47.985430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.019 [2024-07-26 18:27:47.985450] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:22.019 [2024-07-26 18:27:47.985464] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:22.019 [2024-07-26 18:27:47.985477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.019 [2024-07-26 18:27:47.985506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.019 [2024-07-26 18:27:47.985541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.019 [2024-07-26 18:27:47.985694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.019 [2024-07-26 18:27:47.985707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.019 [2024-07-26 18:27:47.985713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.019 [2024-07-26 18:27:47.985729] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:22.019 [2024-07-26 18:27:47.985742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:22.019 [2024-07-26 18:27:47.985754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.019 [2024-07-26 18:27:47.985768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.019 [2024-07-26 18:27:47.985778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.019 [2024-07-26 18:27:47.985799] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.019 [2024-07-26 18:27:47.985948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.019 [2024-07-26 18:27:47.985959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.019 [2024-07-26 18:27:47.985966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.985973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.020 [2024-07-26 18:27:47.985982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:22.020 [2024-07-26 18:27:47.985997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.986023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.020 [2024-07-26 18:27:47.986057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.020 [2024-07-26 18:27:47.986207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.020 [2024-07-26 18:27:47.986222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.020 [2024-07-26 18:27:47.986229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.020 [2024-07-26 18:27:47.986245] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:22.020 [2024-07-26 18:27:47.986254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:22.020 [2024-07-26 18:27:47.986267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:22.020 [2024-07-26 18:27:47.986381] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:22.020 [2024-07-26 18:27:47.986391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:22.020 [2024-07-26 18:27:47.986423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.986448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.020 [2024-07-26 18:27:47.986468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.020 [2024-07-26 18:27:47.986616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
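(The FABRIC PROPERTY GET/SET exchanges in this stretch of the trace are the standard NVMe controller-enable handshake carried over the fabric: read VS and CAP, clear CC.EN and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY. The same discovery service can also be exercised with the kernel initiator instead of spdk_nvme_identify; a rough equivalent, assuming nvme-cli is installed and relying on the nvme-tcp module loaded earlier in this run:)

    # from the initiator side of this setup
    nvme discover -t tcp -a 10.0.0.2 -s 4420                             # discovery log, as dumped below
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1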
00:27:22.020 [2024-07-26 18:27:47.986631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.020 [2024-07-26 18:27:47.986637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.020 [2024-07-26 18:27:47.986652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:22.020 [2024-07-26 18:27:47.986669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.986694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.020 [2024-07-26 18:27:47.986714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.020 [2024-07-26 18:27:47.986849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.020 [2024-07-26 18:27:47.986863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.020 [2024-07-26 18:27:47.986870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.020 [2024-07-26 18:27:47.986885] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:22.020 [2024-07-26 18:27:47.986893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:22.020 [2024-07-26 18:27:47.986906] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:22.020 [2024-07-26 18:27:47.986920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:22.020 [2024-07-26 18:27:47.986935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.986943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.986954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.020 [2024-07-26 18:27:47.986974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.020 [2024-07-26 18:27:47.987185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.020 [2024-07-26 18:27:47.987202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.020 [2024-07-26 18:27:47.987209] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987216] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e630): datao=0, datal=4096, cccid=0 00:27:22.020 [2024-07-26 18:27:47.987224] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ccf80) on tqpair(0x127e630): expected_datao=0, payload_size=4096 00:27:22.020 [2024-07-26 18:27:47.987237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987249] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987257] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.020 [2024-07-26 18:27:47.987302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.020 [2024-07-26 18:27:47.987309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.020 [2024-07-26 18:27:47.987327] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:22.020 [2024-07-26 18:27:47.987336] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:22.020 [2024-07-26 18:27:47.987344] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:22.020 [2024-07-26 18:27:47.987358] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:22.020 [2024-07-26 18:27:47.987381] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:22.020 [2024-07-26 18:27:47.987390] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:22.020 [2024-07-26 18:27:47.987404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:22.020 [2024-07-26 18:27:47.987429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.987455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:22.020 [2024-07-26 18:27:47.987475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.020 [2024-07-26 18:27:47.987647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.020 [2024-07-26 18:27:47.987662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.020 [2024-07-26 18:27:47.987669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.020 [2024-07-26 18:27:47.987687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.987711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.020 [2024-07-26 18:27:47.987721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.987743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.020 [2024-07-26 18:27:47.987753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.987778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.020 [2024-07-26 18:27:47.987788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.020 [2024-07-26 18:27:47.987802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.020 [2024-07-26 18:27:47.987810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.020 [2024-07-26 18:27:47.987819] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:22.021 [2024-07-26 18:27:47.987837] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:22.021 [2024-07-26 18:27:47.987849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:47.987857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e630) 00:27:22.021 [2024-07-26 18:27:47.987867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.021 [2024-07-26 18:27:47.987902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ccf80, cid 0, qid 0 00:27:22.021 [2024-07-26 18:27:47.987913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd100, cid 1, qid 0 00:27:22.021 [2024-07-26 18:27:47.987921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd280, cid 2, qid 0 00:27:22.021 [2024-07-26 18:27:47.987928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.021 [2024-07-26 18:27:47.987950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd580, cid 4, qid 0 00:27:22.021 [2024-07-26 18:27:47.988142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.021 [2024-07-26 18:27:47.988158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.021 [2024-07-26 18:27:47.988166] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:47.988173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd580) on tqpair=0x127e630 00:27:22.021 [2024-07-26 18:27:47.988181] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:22.021 [2024-07-26 18:27:47.988191] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:22.021 [2024-07-26 18:27:47.988209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:47.988218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e630) 00:27:22.021 [2024-07-26 18:27:47.988229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.021 [2024-07-26 18:27:47.988265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd580, cid 4, qid 0 00:27:22.021 [2024-07-26 18:27:47.988446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.021 [2024-07-26 18:27:47.988461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.021 [2024-07-26 18:27:47.988468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:47.988475] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e630): datao=0, datal=4096, cccid=4 00:27:22.021 [2024-07-26 18:27:47.988482] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cd580) on tqpair(0x127e630): expected_datao=0, payload_size=4096 00:27:22.021 [2024-07-26 18:27:47.988490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:47.988516] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:47.988525] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.021 [2024-07-26 18:27:48.031109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.021 [2024-07-26 18:27:48.031117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd580) on tqpair=0x127e630 00:27:22.021 [2024-07-26 18:27:48.031143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:22.021 [2024-07-26 18:27:48.031179] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e630) 00:27:22.021 [2024-07-26 18:27:48.031201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.021 [2024-07-26 18:27:48.031213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127e630) 00:27:22.021 [2024-07-26 
18:27:48.031236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.021 [2024-07-26 18:27:48.031263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd580, cid 4, qid 0 00:27:22.021 [2024-07-26 18:27:48.031291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd700, cid 5, qid 0 00:27:22.021 [2024-07-26 18:27:48.031497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.021 [2024-07-26 18:27:48.031509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.021 [2024-07-26 18:27:48.031516] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031522] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e630): datao=0, datal=1024, cccid=4 00:27:22.021 [2024-07-26 18:27:48.031530] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cd580) on tqpair(0x127e630): expected_datao=0, payload_size=1024 00:27:22.021 [2024-07-26 18:27:48.031538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031548] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031555] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.021 [2024-07-26 18:27:48.031587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.021 [2024-07-26 18:27:48.031593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.031600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd700) on tqpair=0x127e630 00:27:22.021 [2024-07-26 18:27:48.072225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.021 [2024-07-26 18:27:48.072245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.021 [2024-07-26 18:27:48.072253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd580) on tqpair=0x127e630 00:27:22.021 [2024-07-26 18:27:48.072278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e630) 00:27:22.021 [2024-07-26 18:27:48.072300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.021 [2024-07-26 18:27:48.072329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd580, cid 4, qid 0 00:27:22.021 [2024-07-26 18:27:48.072555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.021 [2024-07-26 18:27:48.072583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.021 [2024-07-26 18:27:48.072597] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072604] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e630): datao=0, datal=3072, cccid=4 00:27:22.021 [2024-07-26 18:27:48.072612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cd580) on tqpair(0x127e630): expected_datao=0, payload_size=3072 00:27:22.021 
[2024-07-26 18:27:48.072620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072631] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072639] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.021 [2024-07-26 18:27:48.072693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.021 [2024-07-26 18:27:48.072700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd580) on tqpair=0x127e630 00:27:22.021 [2024-07-26 18:27:48.072722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.072731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e630) 00:27:22.021 [2024-07-26 18:27:48.072742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.021 [2024-07-26 18:27:48.072784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd580, cid 4, qid 0 00:27:22.021 [2024-07-26 18:27:48.072980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.021 [2024-07-26 18:27:48.072992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.021 [2024-07-26 18:27:48.072999] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.073006] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e630): datao=0, datal=8, cccid=4 00:27:22.021 [2024-07-26 18:27:48.073013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cd580) on tqpair(0x127e630): expected_datao=0, payload_size=8 00:27:22.021 [2024-07-26 18:27:48.073021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.073031] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.073038] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.113222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.021 [2024-07-26 18:27:48.113240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.021 [2024-07-26 18:27:48.113248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.021 [2024-07-26 18:27:48.113255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd580) on tqpair=0x127e630 00:27:22.021 ===================================================== 00:27:22.021 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:22.021 ===================================================== 00:27:22.021 Controller Capabilities/Features 00:27:22.021 ================================ 00:27:22.021 Vendor ID: 0000 00:27:22.022 Subsystem Vendor ID: 0000 00:27:22.022 Serial Number: .................... 00:27:22.022 Model Number: ........................................ 
00:27:22.022 Firmware Version: 24.09 00:27:22.022 Recommended Arb Burst: 0 00:27:22.022 IEEE OUI Identifier: 00 00 00 00:27:22.022 Multi-path I/O 00:27:22.022 May have multiple subsystem ports: No 00:27:22.022 May have multiple controllers: No 00:27:22.022 Associated with SR-IOV VF: No 00:27:22.022 Max Data Transfer Size: 131072 00:27:22.022 Max Number of Namespaces: 0 00:27:22.022 Max Number of I/O Queues: 1024 00:27:22.022 NVMe Specification Version (VS): 1.3 00:27:22.022 NVMe Specification Version (Identify): 1.3 00:27:22.022 Maximum Queue Entries: 128 00:27:22.022 Contiguous Queues Required: Yes 00:27:22.022 Arbitration Mechanisms Supported 00:27:22.022 Weighted Round Robin: Not Supported 00:27:22.022 Vendor Specific: Not Supported 00:27:22.022 Reset Timeout: 15000 ms 00:27:22.022 Doorbell Stride: 4 bytes 00:27:22.022 NVM Subsystem Reset: Not Supported 00:27:22.022 Command Sets Supported 00:27:22.022 NVM Command Set: Supported 00:27:22.022 Boot Partition: Not Supported 00:27:22.022 Memory Page Size Minimum: 4096 bytes 00:27:22.022 Memory Page Size Maximum: 4096 bytes 00:27:22.022 Persistent Memory Region: Not Supported 00:27:22.022 Optional Asynchronous Events Supported 00:27:22.022 Namespace Attribute Notices: Not Supported 00:27:22.022 Firmware Activation Notices: Not Supported 00:27:22.022 ANA Change Notices: Not Supported 00:27:22.022 PLE Aggregate Log Change Notices: Not Supported 00:27:22.022 LBA Status Info Alert Notices: Not Supported 00:27:22.022 EGE Aggregate Log Change Notices: Not Supported 00:27:22.022 Normal NVM Subsystem Shutdown event: Not Supported 00:27:22.022 Zone Descriptor Change Notices: Not Supported 00:27:22.022 Discovery Log Change Notices: Supported 00:27:22.022 Controller Attributes 00:27:22.022 128-bit Host Identifier: Not Supported 00:27:22.022 Non-Operational Permissive Mode: Not Supported 00:27:22.022 NVM Sets: Not Supported 00:27:22.022 Read Recovery Levels: Not Supported 00:27:22.022 Endurance Groups: Not Supported 00:27:22.022 Predictable Latency Mode: Not Supported 00:27:22.022 Traffic Based Keep ALive: Not Supported 00:27:22.022 Namespace Granularity: Not Supported 00:27:22.022 SQ Associations: Not Supported 00:27:22.022 UUID List: Not Supported 00:27:22.022 Multi-Domain Subsystem: Not Supported 00:27:22.022 Fixed Capacity Management: Not Supported 00:27:22.022 Variable Capacity Management: Not Supported 00:27:22.022 Delete Endurance Group: Not Supported 00:27:22.022 Delete NVM Set: Not Supported 00:27:22.022 Extended LBA Formats Supported: Not Supported 00:27:22.022 Flexible Data Placement Supported: Not Supported 00:27:22.022 00:27:22.022 Controller Memory Buffer Support 00:27:22.022 ================================ 00:27:22.022 Supported: No 00:27:22.022 00:27:22.022 Persistent Memory Region Support 00:27:22.022 ================================ 00:27:22.022 Supported: No 00:27:22.022 00:27:22.022 Admin Command Set Attributes 00:27:22.022 ============================ 00:27:22.022 Security Send/Receive: Not Supported 00:27:22.022 Format NVM: Not Supported 00:27:22.022 Firmware Activate/Download: Not Supported 00:27:22.022 Namespace Management: Not Supported 00:27:22.022 Device Self-Test: Not Supported 00:27:22.022 Directives: Not Supported 00:27:22.022 NVMe-MI: Not Supported 00:27:22.022 Virtualization Management: Not Supported 00:27:22.022 Doorbell Buffer Config: Not Supported 00:27:22.022 Get LBA Status Capability: Not Supported 00:27:22.022 Command & Feature Lockdown Capability: Not Supported 00:27:22.022 Abort Command Limit: 1 00:27:22.022 Async 
Event Request Limit: 4 00:27:22.022 Number of Firmware Slots: N/A 00:27:22.022 Firmware Slot 1 Read-Only: N/A 00:27:22.022 Firmware Activation Without Reset: N/A 00:27:22.022 Multiple Update Detection Support: N/A 00:27:22.022 Firmware Update Granularity: No Information Provided 00:27:22.022 Per-Namespace SMART Log: No 00:27:22.022 Asymmetric Namespace Access Log Page: Not Supported 00:27:22.022 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:22.022 Command Effects Log Page: Not Supported 00:27:22.022 Get Log Page Extended Data: Supported 00:27:22.022 Telemetry Log Pages: Not Supported 00:27:22.022 Persistent Event Log Pages: Not Supported 00:27:22.022 Supported Log Pages Log Page: May Support 00:27:22.022 Commands Supported & Effects Log Page: Not Supported 00:27:22.022 Feature Identifiers & Effects Log Page:May Support 00:27:22.022 NVMe-MI Commands & Effects Log Page: May Support 00:27:22.022 Data Area 4 for Telemetry Log: Not Supported 00:27:22.022 Error Log Page Entries Supported: 128 00:27:22.022 Keep Alive: Not Supported 00:27:22.022 00:27:22.022 NVM Command Set Attributes 00:27:22.022 ========================== 00:27:22.022 Submission Queue Entry Size 00:27:22.022 Max: 1 00:27:22.022 Min: 1 00:27:22.022 Completion Queue Entry Size 00:27:22.022 Max: 1 00:27:22.022 Min: 1 00:27:22.022 Number of Namespaces: 0 00:27:22.022 Compare Command: Not Supported 00:27:22.022 Write Uncorrectable Command: Not Supported 00:27:22.022 Dataset Management Command: Not Supported 00:27:22.022 Write Zeroes Command: Not Supported 00:27:22.022 Set Features Save Field: Not Supported 00:27:22.022 Reservations: Not Supported 00:27:22.022 Timestamp: Not Supported 00:27:22.022 Copy: Not Supported 00:27:22.022 Volatile Write Cache: Not Present 00:27:22.022 Atomic Write Unit (Normal): 1 00:27:22.022 Atomic Write Unit (PFail): 1 00:27:22.022 Atomic Compare & Write Unit: 1 00:27:22.022 Fused Compare & Write: Supported 00:27:22.022 Scatter-Gather List 00:27:22.022 SGL Command Set: Supported 00:27:22.022 SGL Keyed: Supported 00:27:22.022 SGL Bit Bucket Descriptor: Not Supported 00:27:22.022 SGL Metadata Pointer: Not Supported 00:27:22.022 Oversized SGL: Not Supported 00:27:22.022 SGL Metadata Address: Not Supported 00:27:22.022 SGL Offset: Supported 00:27:22.022 Transport SGL Data Block: Not Supported 00:27:22.022 Replay Protected Memory Block: Not Supported 00:27:22.022 00:27:22.022 Firmware Slot Information 00:27:22.022 ========================= 00:27:22.022 Active slot: 0 00:27:22.022 00:27:22.022 00:27:22.022 Error Log 00:27:22.022 ========= 00:27:22.022 00:27:22.022 Active Namespaces 00:27:22.022 ================= 00:27:22.022 Discovery Log Page 00:27:22.022 ================== 00:27:22.022 Generation Counter: 2 00:27:22.022 Number of Records: 2 00:27:22.022 Record Format: 0 00:27:22.022 00:27:22.022 Discovery Log Entry 0 00:27:22.022 ---------------------- 00:27:22.022 Transport Type: 3 (TCP) 00:27:22.022 Address Family: 1 (IPv4) 00:27:22.022 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:22.022 Entry Flags: 00:27:22.022 Duplicate Returned Information: 1 00:27:22.022 Explicit Persistent Connection Support for Discovery: 1 00:27:22.022 Transport Requirements: 00:27:22.022 Secure Channel: Not Required 00:27:22.022 Port ID: 0 (0x0000) 00:27:22.022 Controller ID: 65535 (0xffff) 00:27:22.022 Admin Max SQ Size: 128 00:27:22.022 Transport Service Identifier: 4420 00:27:22.022 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:22.022 Transport Address: 10.0.0.2 00:27:22.022 
Discovery Log Entry 1 00:27:22.022 ---------------------- 00:27:22.023 Transport Type: 3 (TCP) 00:27:22.023 Address Family: 1 (IPv4) 00:27:22.023 Subsystem Type: 2 (NVM Subsystem) 00:27:22.023 Entry Flags: 00:27:22.023 Duplicate Returned Information: 0 00:27:22.023 Explicit Persistent Connection Support for Discovery: 0 00:27:22.023 Transport Requirements: 00:27:22.023 Secure Channel: Not Required 00:27:22.023 Port ID: 0 (0x0000) 00:27:22.023 Controller ID: 65535 (0xffff) 00:27:22.023 Admin Max SQ Size: 128 00:27:22.023 Transport Service Identifier: 4420 00:27:22.023 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:22.023 Transport Address: 10.0.0.2 [2024-07-26 18:27:48.113360] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:22.023 [2024-07-26 18:27:48.113382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ccf80) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.113393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.023 [2024-07-26 18:27:48.113402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd100) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.113410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.023 [2024-07-26 18:27:48.113419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd280) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.113427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.023 [2024-07-26 18:27:48.113450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.113458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.023 [2024-07-26 18:27:48.113478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.113487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.113494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 18:27:48.113505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.113544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.113689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.113705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.113712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.113719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.113730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.113738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.113744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 
18:27:48.113755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.113781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.113966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.113977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.113984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.113991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.114000] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:22.023 [2024-07-26 18:27:48.114008] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:22.023 [2024-07-26 18:27:48.114023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 18:27:48.114073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.114095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.114242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.114257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.114264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.114289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 18:27:48.114316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.114337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.114497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.114509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.114519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.114542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114558] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 18:27:48.114569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.114588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.114721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.114735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.114742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.114765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 18:27:48.114791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.114810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.114971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.114983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.114990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.114997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.115012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.115021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.115028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.023 [2024-07-26 18:27:48.115038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.023 [2024-07-26 18:27:48.115081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.023 [2024-07-26 18:27:48.115231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.023 [2024-07-26 18:27:48.115246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.023 [2024-07-26 18:27:48.115253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.115260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.023 [2024-07-26 18:27:48.115277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.115287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.023 [2024-07-26 18:27:48.115294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.115304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.115325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.115485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.115497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.115504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.115531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.115558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.115577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.115704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.115719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.115725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.115748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.115774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.115794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.115919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.115934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.115941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.115963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.115979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.115989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.116009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 
[2024-07-26 18:27:48.116163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.116194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.116201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.116225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.116267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.116289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.116441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.116456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.116463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.116490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.116517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.116537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.116680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.116706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.116713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.116735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.116776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.116796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.116952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.116964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:22.024 [2024-07-26 18:27:48.116971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.116977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.116993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.117002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.117008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.117019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.117053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.117205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.117221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.117228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.117235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.024 [2024-07-26 18:27:48.117251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.117261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.117268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.024 [2024-07-26 18:27:48.117279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.024 [2024-07-26 18:27:48.117300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.024 [2024-07-26 18:27:48.117456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.024 [2024-07-26 18:27:48.117471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.024 [2024-07-26 18:27:48.117478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.024 [2024-07-26 18:27:48.117485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.117501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.117531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.117552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.117693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.117704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.117711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.117733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.117759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.117778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.117921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.117936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.117943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.117965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.117981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.117991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.118011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.118167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.118183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.118190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.118214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.118257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.118278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.118459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.118472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.118478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.118501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118510] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.118533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.118553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.118683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.118698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.118704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.118727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.118754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.118774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.118930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.118941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.118948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.118970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.118986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 [2024-07-26 18:27:48.118996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.119016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.123089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.123105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.123122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.123129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.123147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.123157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.123163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e630) 00:27:22.025 
[2024-07-26 18:27:48.123174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.025 [2024-07-26 18:27:48.123196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cd400, cid 3, qid 0 00:27:22.025 [2024-07-26 18:27:48.123356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.025 [2024-07-26 18:27:48.123371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.025 [2024-07-26 18:27:48.123393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.025 [2024-07-26 18:27:48.123400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12cd400) on tqpair=0x127e630 00:27:22.025 [2024-07-26 18:27:48.123414] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:27:22.025 00:27:22.025 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:22.025 [2024-07-26 18:27:48.154987] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:22.025 [2024-07-26 18:27:48.155041] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557551 ] 00:27:22.287 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.287 [2024-07-26 18:27:48.172331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
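For reference, the two spdk_nvme_identify passes captured above map onto the following shell invocations (a minimal sketch: only the second command line appears verbatim in this log, at host/identify.sh@45; the first is reconstructed on the assumption that identify.sh simply omits subnqn, in which case spdk_nvme_identify targets the discovery controller nqn.2014-08.org.nvmexpress.discovery by default):

# Pass 1 (assumed form): discovery controller - produced the Discovery Log Page above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -L all
# Pass 2 (verbatim from the log): the NVM subsystem itself
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
  -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all

Both passes connect to the same listener (10.0.0.2 port 4420); -L all is what enables the *DEBUG* trace lines interleaved with the *NOTICE* command prints, which is why the controller-initialization state machine below is visible step by step.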
00:27:22.287 [2024-07-26 18:27:48.189887] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:22.287 [2024-07-26 18:27:48.189929] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:22.287 [2024-07-26 18:27:48.189938] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:22.287 [2024-07-26 18:27:48.189953] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:22.287 [2024-07-26 18:27:48.189964] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:22.287 [2024-07-26 18:27:48.190264] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:22.287 [2024-07-26 18:27:48.190303] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdca630 0 00:27:22.287 [2024-07-26 18:27:48.197068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:22.287 [2024-07-26 18:27:48.197092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:22.287 [2024-07-26 18:27:48.197101] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:22.287 [2024-07-26 18:27:48.197107] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:22.287 [2024-07-26 18:27:48.197147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.287 [2024-07-26 18:27:48.197159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.287 [2024-07-26 18:27:48.197166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.287 [2024-07-26 18:27:48.197181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:22.287 [2024-07-26 18:27:48.197208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.287 [2024-07-26 18:27:48.205076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.287 [2024-07-26 18:27:48.205094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.287 [2024-07-26 18:27:48.205101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.287 [2024-07-26 18:27:48.205109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.205127] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:22.288 [2024-07-26 18:27:48.205138] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:22.288 [2024-07-26 18:27:48.205147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:22.288 [2024-07-26 18:27:48.205165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.205193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.205221] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.205362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.205375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.205382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.205408] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:22.288 [2024-07-26 18:27:48.205423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:22.288 [2024-07-26 18:27:48.205435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.205461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.205482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.205612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.205627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.205634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.205650] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:22.288 [2024-07-26 18:27:48.205664] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:22.288 [2024-07-26 18:27:48.205677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.205703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.205733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.205856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.205869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.205876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.205891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:22.288 [2024-07-26 18:27:48.205908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.205924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.205935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.205955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.206081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.206099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.206107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.206121] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:22.288 [2024-07-26 18:27:48.206130] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:22.288 [2024-07-26 18:27:48.206144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:22.288 [2024-07-26 18:27:48.206253] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:22.288 [2024-07-26 18:27:48.206261] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:22.288 [2024-07-26 18:27:48.206273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.206298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.206320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.206461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.206477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.206484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.206499] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:22.288 [2024-07-26 18:27:48.206516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.206542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.206563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.206692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.206707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.206714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.206729] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:22.288 [2024-07-26 18:27:48.206737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:22.288 [2024-07-26 18:27:48.206751] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:22.288 [2024-07-26 18:27:48.206765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:22.288 [2024-07-26 18:27:48.206778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.206786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.288 [2024-07-26 18:27:48.206800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.288 [2024-07-26 18:27:48.206822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.288 [2024-07-26 18:27:48.206986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.288 [2024-07-26 18:27:48.207002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.288 [2024-07-26 18:27:48.207009] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.207015] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=4096, cccid=0 00:27:22.288 [2024-07-26 18:27:48.207023] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe18f80) on tqpair(0xdca630): expected_datao=0, payload_size=4096 00:27:22.288 [2024-07-26 18:27:48.207032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.207042] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.207050] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.207078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.288 [2024-07-26 18:27:48.207090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.288 [2024-07-26 18:27:48.207097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.288 [2024-07-26 18:27:48.207104] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.288 [2024-07-26 18:27:48.207115] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:22.289 [2024-07-26 18:27:48.207123] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:22.289 [2024-07-26 18:27:48.207131] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:22.289 [2024-07-26 18:27:48.207138] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:22.289 [2024-07-26 18:27:48.207146] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:22.289 [2024-07-26 18:27:48.207155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.207169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.207185] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:22.289 [2024-07-26 18:27:48.207234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.289 [2024-07-26 18:27:48.207373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.289 [2024-07-26 18:27:48.207388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.289 [2024-07-26 18:27:48.207395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.289 [2024-07-26 18:27:48.207412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.289 [2024-07-26 18:27:48.207450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.289 [2024-07-26 18:27:48.207483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207490] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.289 [2024-07-26 18:27:48.207515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.289 [2024-07-26 18:27:48.207546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.207565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.207578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.289 [2024-07-26 18:27:48.207618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe18f80, cid 0, qid 0 00:27:22.289 [2024-07-26 18:27:48.207630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19100, cid 1, qid 0 00:27:22.289 [2024-07-26 18:27:48.207638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19280, cid 2, qid 0 00:27:22.289 [2024-07-26 18:27:48.207646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.289 [2024-07-26 18:27:48.207654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.289 [2024-07-26 18:27:48.207808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.289 [2024-07-26 18:27:48.207823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.289 [2024-07-26 18:27:48.207831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.289 [2024-07-26 18:27:48.207846] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:22.289 [2024-07-26 18:27:48.207855] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.207873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.207885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
00:27:22.289 [2024-07-26 18:27:48.207896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.207911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.207924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:22.289 [2024-07-26 18:27:48.207946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.289 [2024-07-26 18:27:48.208091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.289 [2024-07-26 18:27:48.208106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.289 [2024-07-26 18:27:48.208114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.289 [2024-07-26 18:27:48.208188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.208208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.208222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.208241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.289 [2024-07-26 18:27:48.208263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.289 [2024-07-26 18:27:48.208467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.289 [2024-07-26 18:27:48.208482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.289 [2024-07-26 18:27:48.208489] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=4096, cccid=4 00:27:22.289 [2024-07-26 18:27:48.208504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19580) on tqpair(0xdca630): expected_datao=0, payload_size=4096 00:27:22.289 [2024-07-26 18:27:48.208512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208523] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208531] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.289 [2024-07-26 18:27:48.208566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.289 [2024-07-26 18:27:48.208573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.289 [2024-07-26 18:27:48.208600] 
nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:22.289 [2024-07-26 18:27:48.208616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.208633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:22.289 [2024-07-26 18:27:48.208647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.289 [2024-07-26 18:27:48.208665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.289 [2024-07-26 18:27:48.208687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.289 [2024-07-26 18:27:48.208835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.289 [2024-07-26 18:27:48.208848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.289 [2024-07-26 18:27:48.208860] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.289 [2024-07-26 18:27:48.208868] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=4096, cccid=4 00:27:22.290 [2024-07-26 18:27:48.208876] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19580) on tqpair(0xdca630): expected_datao=0, payload_size=4096 00:27:22.290 [2024-07-26 18:27:48.208884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.208901] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.208910] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.208996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.209008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.209015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.209022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.290 [2024-07-26 18:27:48.209041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.213108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.213131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.290 [2024-07-26 18:27:48.213276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.290 [2024-07-26 
18:27:48.213289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.290 [2024-07-26 18:27:48.213296] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213303] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=4096, cccid=4 00:27:22.290 [2024-07-26 18:27:48.213310] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19580) on tqpair(0xdca630): expected_datao=0, payload_size=4096 00:27:22.290 [2024-07-26 18:27:48.213318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213335] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213344] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.213446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.213453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.290 [2024-07-26 18:27:48.213473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213516] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213546] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:22.290 [2024-07-26 18:27:48.213554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:22.290 [2024-07-26 18:27:48.213563] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:22.290 [2024-07-26 18:27:48.213582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.213602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.213613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 
18:27:48.213627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.213636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.290 [2024-07-26 18:27:48.213661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.290 [2024-07-26 18:27:48.213673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19700, cid 5, qid 0 00:27:22.290 [2024-07-26 18:27:48.213813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.213826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.213832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.290 [2024-07-26 18:27:48.213849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.213859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.213866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19700) on tqpair=0xdca630 00:27:22.290 [2024-07-26 18:27:48.213888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.213898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.213908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.213929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19700, cid 5, qid 0 00:27:22.290 [2024-07-26 18:27:48.214055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.214075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.214083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19700) on tqpair=0xdca630 00:27:22.290 [2024-07-26 18:27:48.214106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.214126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.214146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19700, cid 5, qid 0 00:27:22.290 [2024-07-26 18:27:48.214281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.214293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.214304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19700) on tqpair=0xdca630 00:27:22.290 [2024-07-26 
18:27:48.214327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.214347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.214367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19700, cid 5, qid 0 00:27:22.290 [2024-07-26 18:27:48.214494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.290 [2024-07-26 18:27:48.214508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.290 [2024-07-26 18:27:48.214515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19700) on tqpair=0xdca630 00:27:22.290 [2024-07-26 18:27:48.214546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.214568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.214580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.214597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.214609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.214626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.214638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.290 [2024-07-26 18:27:48.214646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdca630) 00:27:22.290 [2024-07-26 18:27:48.214656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.290 [2024-07-26 18:27:48.214678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19700, cid 5, qid 0 00:27:22.290 [2024-07-26 18:27:48.214689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19580, cid 4, qid 0 00:27:22.290 [2024-07-26 18:27:48.214697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19880, cid 6, qid 0 00:27:22.291 [2024-07-26 18:27:48.214705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19a00, cid 7, qid 0 00:27:22.291 [2024-07-26 18:27:48.214965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.291 [2024-07-26 18:27:48.214980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:27:22.291 [2024-07-26 18:27:48.214987] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.214994] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=8192, cccid=5 00:27:22.291 [2024-07-26 18:27:48.215002] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19700) on tqpair(0xdca630): expected_datao=0, payload_size=8192 00:27:22.291 [2024-07-26 18:27:48.215010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215077] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215093] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.291 [2024-07-26 18:27:48.215112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.291 [2024-07-26 18:27:48.215119] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215126] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=512, cccid=4 00:27:22.291 [2024-07-26 18:27:48.215133] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19580) on tqpair(0xdca630): expected_datao=0, payload_size=512 00:27:22.291 [2024-07-26 18:27:48.215141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215150] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215158] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.291 [2024-07-26 18:27:48.215175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.291 [2024-07-26 18:27:48.215182] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215189] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=512, cccid=6 00:27:22.291 [2024-07-26 18:27:48.215197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19880) on tqpair(0xdca630): expected_datao=0, payload_size=512 00:27:22.291 [2024-07-26 18:27:48.215204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215213] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215221] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:22.291 [2024-07-26 18:27:48.215239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:22.291 [2024-07-26 18:27:48.215245] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215252] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdca630): datao=0, datal=4096, cccid=7 00:27:22.291 [2024-07-26 18:27:48.215260] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe19a00) on tqpair(0xdca630): expected_datao=0, payload_size=4096 00:27:22.291 [2024-07-26 18:27:48.215267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215277] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:27:22.291 [2024-07-26 18:27:48.215284] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.291 [2024-07-26 18:27:48.215306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.291 [2024-07-26 18:27:48.215312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19700) on tqpair=0xdca630 00:27:22.291 [2024-07-26 18:27:48.215337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.291 [2024-07-26 18:27:48.215349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.291 [2024-07-26 18:27:48.215355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19580) on tqpair=0xdca630 00:27:22.291 [2024-07-26 18:27:48.215392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.291 [2024-07-26 18:27:48.215403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.291 [2024-07-26 18:27:48.215409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19880) on tqpair=0xdca630 00:27:22.291 [2024-07-26 18:27:48.215426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.291 [2024-07-26 18:27:48.215436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.291 [2024-07-26 18:27:48.215445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.291 [2024-07-26 18:27:48.215467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19a00) on tqpair=0xdca630 00:27:22.291 ===================================================== 00:27:22.291 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.291 ===================================================== 00:27:22.291 Controller Capabilities/Features 00:27:22.291 ================================ 00:27:22.291 Vendor ID: 8086 00:27:22.291 Subsystem Vendor ID: 8086 00:27:22.291 Serial Number: SPDK00000000000001 00:27:22.291 Model Number: SPDK bdev Controller 00:27:22.291 Firmware Version: 24.09 00:27:22.291 Recommended Arb Burst: 6 00:27:22.291 IEEE OUI Identifier: e4 d2 5c 00:27:22.291 Multi-path I/O 00:27:22.291 May have multiple subsystem ports: Yes 00:27:22.291 May have multiple controllers: Yes 00:27:22.291 Associated with SR-IOV VF: No 00:27:22.291 Max Data Transfer Size: 131072 00:27:22.291 Max Number of Namespaces: 32 00:27:22.291 Max Number of I/O Queues: 127 00:27:22.291 NVMe Specification Version (VS): 1.3 00:27:22.291 NVMe Specification Version (Identify): 1.3 00:27:22.291 Maximum Queue Entries: 128 00:27:22.291 Contiguous Queues Required: Yes 00:27:22.291 Arbitration Mechanisms Supported 00:27:22.291 Weighted Round Robin: Not Supported 00:27:22.291 Vendor Specific: Not Supported 00:27:22.291 Reset Timeout: 15000 ms 00:27:22.291 Doorbell Stride: 4 bytes 00:27:22.291 NVM Subsystem Reset: Not Supported 00:27:22.291 Command Sets Supported 00:27:22.291 NVM Command Set: Supported 00:27:22.291 Boot Partition: Not Supported 00:27:22.291 Memory Page Size Minimum: 4096 bytes 00:27:22.291 Memory Page Size Maximum: 4096 bytes 00:27:22.291 Persistent Memory Region: Not 
Supported 00:27:22.291 Optional Asynchronous Events Supported 00:27:22.291 Namespace Attribute Notices: Supported 00:27:22.291 Firmware Activation Notices: Not Supported 00:27:22.291 ANA Change Notices: Not Supported 00:27:22.291 PLE Aggregate Log Change Notices: Not Supported 00:27:22.291 LBA Status Info Alert Notices: Not Supported 00:27:22.291 EGE Aggregate Log Change Notices: Not Supported 00:27:22.291 Normal NVM Subsystem Shutdown event: Not Supported 00:27:22.291 Zone Descriptor Change Notices: Not Supported 00:27:22.291 Discovery Log Change Notices: Not Supported 00:27:22.291 Controller Attributes 00:27:22.291 128-bit Host Identifier: Supported 00:27:22.291 Non-Operational Permissive Mode: Not Supported 00:27:22.291 NVM Sets: Not Supported 00:27:22.291 Read Recovery Levels: Not Supported 00:27:22.291 Endurance Groups: Not Supported 00:27:22.291 Predictable Latency Mode: Not Supported 00:27:22.291 Traffic Based Keep Alive: Not Supported 00:27:22.291 Namespace Granularity: Not Supported 00:27:22.291 SQ Associations: Not Supported 00:27:22.291 UUID List: Not Supported 00:27:22.291 Multi-Domain Subsystem: Not Supported 00:27:22.291 Fixed Capacity Management: Not Supported 00:27:22.291 Variable Capacity Management: Not Supported 00:27:22.291 Delete Endurance Group: Not Supported 00:27:22.291 Delete NVM Set: Not Supported 00:27:22.291 Extended LBA Formats Supported: Not Supported 00:27:22.291 Flexible Data Placement Supported: Not Supported 00:27:22.291 00:27:22.291 Controller Memory Buffer Support 00:27:22.291 ================================ 00:27:22.291 Supported: No 00:27:22.291 00:27:22.291 Persistent Memory Region Support 00:27:22.291 ================================ 00:27:22.291 Supported: No 00:27:22.291 00:27:22.291 Admin Command Set Attributes 00:27:22.291 ============================ 00:27:22.291 Security Send/Receive: Not Supported 00:27:22.291 Format NVM: Not Supported 00:27:22.291 Firmware Activate/Download: Not Supported 00:27:22.291 Namespace Management: Not Supported 00:27:22.291 Device Self-Test: Not Supported 00:27:22.291 Directives: Not Supported 00:27:22.291 NVMe-MI: Not Supported 00:27:22.291 Virtualization Management: Not Supported 00:27:22.291 Doorbell Buffer Config: Not Supported 00:27:22.291 Get LBA Status Capability: Not Supported 00:27:22.291 Command & Feature Lockdown Capability: Not Supported 00:27:22.291 Abort Command Limit: 4 00:27:22.291 Async Event Request Limit: 4 00:27:22.292 Number of Firmware Slots: N/A 00:27:22.292 Firmware Slot 1 Read-Only: N/A 00:27:22.292 Firmware Activation Without Reset: N/A 00:27:22.292 Multiple Update Detection Support: N/A 00:27:22.292 Firmware Update Granularity: No Information Provided 00:27:22.292 Per-Namespace SMART Log: No 00:27:22.292 Asymmetric Namespace Access Log Page: Not Supported 00:27:22.292 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:22.292 Command Effects Log Page: Supported 00:27:22.292 Get Log Page Extended Data: Supported 00:27:22.292 Telemetry Log Pages: Not Supported 00:27:22.292 Persistent Event Log Pages: Not Supported 00:27:22.292 Supported Log Pages Log Page: May Support 00:27:22.292 Commands Supported & Effects Log Page: Not Supported 00:27:22.292 Feature Identifiers & Effects Log Page: May Support 00:27:22.292 NVMe-MI Commands & Effects Log Page: May Support 00:27:22.292 Data Area 4 for Telemetry Log: Not Supported 00:27:22.292 Error Log Page Entries Supported: 128 00:27:22.292 Keep Alive: Supported 00:27:22.292 Keep Alive Granularity: 10000 ms 00:27:22.292 00:27:22.292 NVM Command Set Attributes
00:27:22.292 ========================== 00:27:22.292 Submission Queue Entry Size 00:27:22.292 Max: 64 00:27:22.292 Min: 64 00:27:22.292 Completion Queue Entry Size 00:27:22.292 Max: 16 00:27:22.292 Min: 16 00:27:22.292 Number of Namespaces: 32 00:27:22.292 Compare Command: Supported 00:27:22.292 Write Uncorrectable Command: Not Supported 00:27:22.292 Dataset Management Command: Supported 00:27:22.292 Write Zeroes Command: Supported 00:27:22.292 Set Features Save Field: Not Supported 00:27:22.292 Reservations: Supported 00:27:22.292 Timestamp: Not Supported 00:27:22.292 Copy: Supported 00:27:22.292 Volatile Write Cache: Present 00:27:22.292 Atomic Write Unit (Normal): 1 00:27:22.292 Atomic Write Unit (PFail): 1 00:27:22.292 Atomic Compare & Write Unit: 1 00:27:22.292 Fused Compare & Write: Supported 00:27:22.292 Scatter-Gather List 00:27:22.292 SGL Command Set: Supported 00:27:22.292 SGL Keyed: Supported 00:27:22.292 SGL Bit Bucket Descriptor: Not Supported 00:27:22.292 SGL Metadata Pointer: Not Supported 00:27:22.292 Oversized SGL: Not Supported 00:27:22.292 SGL Metadata Address: Not Supported 00:27:22.292 SGL Offset: Supported 00:27:22.292 Transport SGL Data Block: Not Supported 00:27:22.292 Replay Protected Memory Block: Not Supported 00:27:22.292 00:27:22.292 Firmware Slot Information 00:27:22.292 ========================= 00:27:22.292 Active slot: 1 00:27:22.292 Slot 1 Firmware Revision: 24.09 00:27:22.292 00:27:22.292 00:27:22.292 Commands Supported and Effects 00:27:22.292 ============================== 00:27:22.292 Admin Commands 00:27:22.292 -------------- 00:27:22.292 Get Log Page (02h): Supported 00:27:22.292 Identify (06h): Supported 00:27:22.292 Abort (08h): Supported 00:27:22.292 Set Features (09h): Supported 00:27:22.292 Get Features (0Ah): Supported 00:27:22.292 Asynchronous Event Request (0Ch): Supported 00:27:22.292 Keep Alive (18h): Supported 00:27:22.292 I/O Commands 00:27:22.292 ------------ 00:27:22.292 Flush (00h): Supported LBA-Change 00:27:22.292 Write (01h): Supported LBA-Change 00:27:22.292 Read (02h): Supported 00:27:22.292 Compare (05h): Supported 00:27:22.292 Write Zeroes (08h): Supported LBA-Change 00:27:22.292 Dataset Management (09h): Supported LBA-Change 00:27:22.292 Copy (19h): Supported LBA-Change 00:27:22.292 00:27:22.292 Error Log 00:27:22.292 ========= 00:27:22.292 00:27:22.292 Arbitration 00:27:22.292 =========== 00:27:22.292 Arbitration Burst: 1 00:27:22.292 00:27:22.292 Power Management 00:27:22.292 ================ 00:27:22.292 Number of Power States: 1 00:27:22.292 Current Power State: Power State #0 00:27:22.292 Power State #0: 00:27:22.292 Max Power: 0.00 W 00:27:22.292 Non-Operational State: Operational 00:27:22.292 Entry Latency: Not Reported 00:27:22.292 Exit Latency: Not Reported 00:27:22.292 Relative Read Throughput: 0 00:27:22.292 Relative Read Latency: 0 00:27:22.292 Relative Write Throughput: 0 00:27:22.292 Relative Write Latency: 0 00:27:22.292 Idle Power: Not Reported 00:27:22.292 Active Power: Not Reported 00:27:22.292 Non-Operational Permissive Mode: Not Supported 00:27:22.292 00:27:22.292 Health Information 00:27:22.292 ================== 00:27:22.292 Critical Warnings: 00:27:22.292 Available Spare Space: OK 00:27:22.292 Temperature: OK 00:27:22.292 Device Reliability: OK 00:27:22.292 Read Only: No 00:27:22.292 Volatile Memory Backup: OK 00:27:22.292 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:22.292 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:22.292 Available Spare: 0% 00:27:22.292 Available Spare Threshold: 0% 
00:27:22.292 Life Percentage Used:[2024-07-26 18:27:48.215591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.292 [2024-07-26 18:27:48.215603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdca630) 00:27:22.292 [2024-07-26 18:27:48.215614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.292 [2024-07-26 18:27:48.215636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19a00, cid 7, qid 0 00:27:22.292 [2024-07-26 18:27:48.215822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.292 [2024-07-26 18:27:48.215835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.292 [2024-07-26 18:27:48.215842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.292 [2024-07-26 18:27:48.215850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19a00) on tqpair=0xdca630 00:27:22.292 [2024-07-26 18:27:48.215891] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:22.292 [2024-07-26 18:27:48.215911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe18f80) on tqpair=0xdca630 00:27:22.292 [2024-07-26 18:27:48.215921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.292 [2024-07-26 18:27:48.215931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19100) on tqpair=0xdca630 00:27:22.292 [2024-07-26 18:27:48.215939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.292 [2024-07-26 18:27:48.215947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19280) on tqpair=0xdca630 00:27:22.292 [2024-07-26 18:27:48.215971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.292 [2024-07-26 18:27:48.215980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.293 [2024-07-26 18:27:48.215987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.293 [2024-07-26 18:27:48.215999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.293 [2024-07-26 18:27:48.216024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.293 [2024-07-26 18:27:48.216068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.293 [2024-07-26 18:27:48.216224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.293 [2024-07-26 18:27:48.216236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.293 [2024-07-26 18:27:48.216244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.293 
[2024-07-26 18:27:48.216262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.293 [2024-07-26 18:27:48.216287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.293 [2024-07-26 18:27:48.216312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.293 [2024-07-26 18:27:48.216459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.293 [2024-07-26 18:27:48.216471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.293 [2024-07-26 18:27:48.216481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.293 [2024-07-26 18:27:48.216497] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:22.293 [2024-07-26 18:27:48.216505] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:22.293 [2024-07-26 18:27:48.216521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.293 [2024-07-26 18:27:48.216547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.293 [2024-07-26 18:27:48.216568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.293 [2024-07-26 18:27:48.216701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.293 [2024-07-26 18:27:48.216714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.293 [2024-07-26 18:27:48.216721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.293 [2024-07-26 18:27:48.216744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.293 [2024-07-26 18:27:48.216771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.293 [2024-07-26 18:27:48.216791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.293 [2024-07-26 18:27:48.216914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.293 [2024-07-26 18:27:48.216926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.293 [2024-07-26 18:27:48.216933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216940] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.293 [2024-07-26 18:27:48.216956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.293 [2024-07-26 18:27:48.216972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.293 [2024-07-26 18:27:48.216983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.293 [2024-07-26 18:27:48.217003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.294 [2024-07-26 18:27:48.221088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.294 [2024-07-26 18:27:48.221105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.294 [2024-07-26 18:27:48.221112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.294 [2024-07-26 18:27:48.221119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.294 [2024-07-26 18:27:48.221150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:22.294 [2024-07-26 18:27:48.221161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:22.294 [2024-07-26 18:27:48.221168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdca630) 00:27:22.294 [2024-07-26 18:27:48.221179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-07-26 18:27:48.221201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe19400, cid 3, qid 0 00:27:22.294 [2024-07-26 18:27:48.221364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:22.294 [2024-07-26 18:27:48.221378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:22.294 [2024-07-26 18:27:48.221385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:22.294 [2024-07-26 18:27:48.221392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe19400) on tqpair=0xdca630 00:27:22.294 [2024-07-26 18:27:48.221405] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:27:22.294 0% 00:27:22.294 Data Units Read: 0 00:27:22.294 Data Units Written: 0 00:27:22.294 Host Read Commands: 0 00:27:22.294 Host Write Commands: 0 00:27:22.294 Controller Busy Time: 0 minutes 00:27:22.294 Power Cycles: 0 00:27:22.294 Power On Hours: 0 hours 00:27:22.294 Unsafe Shutdowns: 0 00:27:22.294 Unrecoverable Media Errors: 0 00:27:22.294 Lifetime Error Log Entries: 0 00:27:22.294 Warning Temperature Time: 0 minutes 00:27:22.294 Critical Temperature Time: 0 minutes 00:27:22.294 00:27:22.294 Number of Queues 00:27:22.294 ================ 00:27:22.294 Number of I/O Submission Queues: 127 00:27:22.294 Number of I/O Completion Queues: 127 00:27:22.294 00:27:22.294 Active Namespaces 00:27:22.294 ================= 00:27:22.294 Namespace ID:1 00:27:22.294 Error Recovery Timeout: Unlimited 00:27:22.294 Command Set Identifier: NVM (00h) 00:27:22.294 Deallocate: Supported 00:27:22.294 Deallocated/Unwritten Error: Not Supported 00:27:22.294 Deallocated Read Value: Unknown 00:27:22.294 Deallocate in Write Zeroes: Not Supported 00:27:22.294 Deallocated 
Guard Field: 0xFFFF 00:27:22.294 Flush: Supported 00:27:22.294 Reservation: Supported 00:27:22.294 Namespace Sharing Capabilities: Multiple Controllers 00:27:22.294 Size (in LBAs): 131072 (0GiB) 00:27:22.294 Capacity (in LBAs): 131072 (0GiB) 00:27:22.294 Utilization (in LBAs): 131072 (0GiB) 00:27:22.294 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:22.294 EUI64: ABCDEF0123456789 00:27:22.294 UUID: 3029d869-3a04-4af8-aad5-dd34ab9f362a 00:27:22.294 Thin Provisioning: Not Supported 00:27:22.294 Per-NS Atomic Units: Yes 00:27:22.294 Atomic Boundary Size (Normal): 0 00:27:22.294 Atomic Boundary Size (PFail): 0 00:27:22.294 Atomic Boundary Offset: 0 00:27:22.294 Maximum Single Source Range Length: 65535 00:27:22.294 Maximum Copy Length: 65535 00:27:22.294 Maximum Source Range Count: 1 00:27:22.294 NGUID/EUI64 Never Reused: No 00:27:22.294 Namespace Write Protected: No 00:27:22.294 Number of LBA Formats: 1 00:27:22.294 Current LBA Format: LBA Format #00 00:27:22.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:22.294 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.294 rmmod nvme_tcp 00:27:22.294 rmmod nvme_fabrics 00:27:22.294 rmmod nvme_keyring 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1557344 ']' 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1557344 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1557344 ']' 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1557344 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557344 00:27:22.294 18:27:48 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557344' 00:27:22.294 killing process with pid 1557344 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1557344 00:27:22.294 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1557344 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.554 18:27:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.464 18:27:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.464 00:27:24.464 real 0m5.965s 00:27:24.464 user 0m6.974s 00:27:24.464 sys 0m1.861s 00:27:24.464 18:27:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.464 18:27:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:24.464 ************************************ 00:27:24.464 END TEST nvmf_identify 00:27:24.464 ************************************ 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.723 ************************************ 00:27:24.723 START TEST nvmf_perf 00:27:24.723 ************************************ 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:24.723 * Looking for test storage... 
00:27:24.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.723 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
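For orientation: the prepare_net_devs/nvmf_tcp_init steps traced below build the two-port loopback topology that the TCP tests run against. A condensed sketch of the commands the trace executes, assuming the two e810 ports came up as cvl_0_0 (target side, moved into a network namespace) and cvl_0_1 (initiator side, left on the host):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
The nvmf target is then started inside cvl_0_0_ns_spdk and listens on 10.0.0.2:4420, while the initiator connects from 10.0.0.1 on the host side; the ping checks further down confirm the path in both directions.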
00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.724 18:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.629 
18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:26.629 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:26.629 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.629 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:27:26.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:26.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.630 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:26.888 00:27:26.888 --- 10.0.0.2 ping statistics --- 00:27:26.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.888 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:26.888 00:27:26.888 --- 10.0.0.1 ping statistics --- 00:27:26.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.888 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1559483 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1559483 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1559483 ']' 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.888 18:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.888 [2024-07-26 18:27:52.892700] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:26.888 [2024-07-26 18:27:52.892781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.888 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.888 [2024-07-26 18:27:52.929576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:26.888 [2024-07-26 18:27:52.961646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.147 [2024-07-26 18:27:53.052491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.147 [2024-07-26 18:27:53.052552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.147 [2024-07-26 18:27:53.052569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.147 [2024-07-26 18:27:53.052582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.147 [2024-07-26 18:27:53.052594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.147 [2024-07-26 18:27:53.052682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.147 [2024-07-26 18:27:53.052731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.147 [2024-07-26 18:27:53.052895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.147 [2024-07-26 18:27:53.052898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:27.147 18:27:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:30.435 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:30.435 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:30.695 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:30.695 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:30.954 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:30.954 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:30.954 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:30.954 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:30.954 18:27:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:30.954 [2024-07-26 18:27:57.081577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.213 18:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.213 18:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:31.213 18:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.471 18:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:31.471 18:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:31.729 18:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.987 [2024-07-26 18:27:58.061291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.987 18:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:32.245 18:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:32.245 18:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:32.245 18:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:32.245 18:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:33.622 Initializing NVMe Controllers 00:27:33.622 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:33.622 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:33.622 Initialization complete. Launching workers. 
00:27:33.622 ======================================================== 00:27:33.622 Latency(us) 00:27:33.622 Device Information : IOPS MiB/s Average min max 00:27:33.622 PCIE (0000:88:00.0) NSID 1 from core 0: 85253.50 333.02 374.92 43.79 4404.27 00:27:33.622 ======================================================== 00:27:33.622 Total : 85253.50 333.02 374.92 43.79 4404.27 00:27:33.622 00:27:33.622 18:27:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.622 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.035 Initializing NVMe Controllers 00:27:35.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:35.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:35.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:35.035 Initialization complete. Launching workers. 00:27:35.035 ======================================================== 00:27:35.035 Latency(us) 00:27:35.035 Device Information : IOPS MiB/s Average min max 00:27:35.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.00 0.27 14762.34 191.22 44799.16 00:27:35.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20788.03 7947.02 50918.15 00:27:35.035 ======================================================== 00:27:35.035 Total : 119.00 0.46 17294.14 191.22 50918.15 00:27:35.035 00:27:35.035 18:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.035 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.407 Initializing NVMe Controllers 00:27:36.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:36.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:36.407 Initialization complete. Launching workers. 
00:27:36.407 ======================================================== 00:27:36.407 Latency(us) 00:27:36.407 Device Information : IOPS MiB/s Average min max 00:27:36.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8146.66 31.82 3927.84 561.01 7643.09 00:27:36.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3898.00 15.23 8242.17 5358.99 16527.28 00:27:36.407 ======================================================== 00:27:36.407 Total : 12044.66 47.05 5324.08 561.01 16527.28 00:27:36.407 00:27:36.407 18:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:36.407 18:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:36.407 18:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:36.407 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.937 Initializing NVMe Controllers 00:27:38.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.937 Controller IO queue size 128, less than required. 00:27:38.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.937 Controller IO queue size 128, less than required. 00:27:38.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:38.937 Initialization complete. Launching workers. 00:27:38.937 ======================================================== 00:27:38.937 Latency(us) 00:27:38.937 Device Information : IOPS MiB/s Average min max 00:27:38.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 951.55 237.89 139426.54 61149.75 208547.89 00:27:38.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 551.24 137.81 243632.32 133919.84 338285.98 00:27:38.937 ======================================================== 00:27:38.937 Total : 1502.80 375.70 177650.34 61149.75 338285.98 00:27:38.937 00:27:38.937 18:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:38.937 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.937 No valid NVMe controllers or AIO or URING devices found 00:27:38.937 Initializing NVMe Controllers 00:27:38.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.937 Controller IO queue size 128, less than required. 00:27:38.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.937 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:38.937 Controller IO queue size 128, less than required. 00:27:38.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.937 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:38.937 WARNING: Some requested NVMe devices were skipped 00:27:38.937 18:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:38.937 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.471 Initializing NVMe Controllers 00:27:41.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.471 Controller IO queue size 128, less than required. 00:27:41.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:41.471 Controller IO queue size 128, less than required. 00:27:41.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:41.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:41.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:41.471 Initialization complete. Launching workers. 00:27:41.471 00:27:41.471 ==================== 00:27:41.471 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:41.471 TCP transport: 00:27:41.471 polls: 29908 00:27:41.471 idle_polls: 9659 00:27:41.471 sock_completions: 20249 00:27:41.471 nvme_completions: 4031 00:27:41.471 submitted_requests: 6088 00:27:41.471 queued_requests: 1 00:27:41.471 00:27:41.471 ==================== 00:27:41.471 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:41.471 TCP transport: 00:27:41.471 polls: 31669 00:27:41.471 idle_polls: 10566 00:27:41.471 sock_completions: 21103 00:27:41.471 nvme_completions: 3345 00:27:41.471 submitted_requests: 5018 00:27:41.471 queued_requests: 1 00:27:41.471 ======================================================== 00:27:41.471 Latency(us) 00:27:41.471 Device Information : IOPS MiB/s Average min max 00:27:41.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1007.50 251.87 130909.06 76278.38 177275.61 00:27:41.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 836.00 209.00 158480.25 62249.66 218629.92 00:27:41.471 ======================================================== 00:27:41.471 Total : 1843.50 460.87 143412.19 62249.66 218629.92 00:27:41.471 00:27:41.471 18:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:41.471 18:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.728 18:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:41.728 18:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:41.728 18:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=5b76d9a0-6201-435d-ace4-b89daeb8ec50 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 5b76d9a0-6201-435d-ace4-b89daeb8ec50 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5b76d9a0-6201-435d-ace4-b89daeb8ec50 00:27:45.915 18:28:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:45.915 { 00:27:45.915 "uuid": "5b76d9a0-6201-435d-ace4-b89daeb8ec50", 00:27:45.915 "name": "lvs_0", 00:27:45.915 "base_bdev": "Nvme0n1", 00:27:45.915 "total_data_clusters": 238234, 00:27:45.915 "free_clusters": 238234, 00:27:45.915 "block_size": 512, 00:27:45.915 "cluster_size": 4194304 00:27:45.915 } 00:27:45.915 ]' 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5b76d9a0-6201-435d-ace4-b89daeb8ec50") .free_clusters' 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5b76d9a0-6201-435d-ace4-b89daeb8ec50") .cluster_size' 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:27:45.915 952936 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5b76d9a0-6201-435d-ace4-b89daeb8ec50 lbd_0 20480 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e0482e52-d02a-4b7d-9170-1808b9b32357 00:27:45.915 18:28:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e0482e52-d02a-4b7d-9170-1808b9b32357 lvs_n_0 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:46.850 { 00:27:46.850 "uuid": "5b76d9a0-6201-435d-ace4-b89daeb8ec50", 00:27:46.850 "name": "lvs_0", 00:27:46.850 "base_bdev": "Nvme0n1", 00:27:46.850 "total_data_clusters": 238234, 
00:27:46.850 "free_clusters": 233114, 00:27:46.850 "block_size": 512, 00:27:46.850 "cluster_size": 4194304 00:27:46.850 }, 00:27:46.850 { 00:27:46.850 "uuid": "cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c", 00:27:46.850 "name": "lvs_n_0", 00:27:46.850 "base_bdev": "e0482e52-d02a-4b7d-9170-1808b9b32357", 00:27:46.850 "total_data_clusters": 5114, 00:27:46.850 "free_clusters": 5114, 00:27:46.850 "block_size": 512, 00:27:46.850 "cluster_size": 4194304 00:27:46.850 } 00:27:46.850 ]' 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c") .free_clusters' 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c") .cluster_size' 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:46.850 20456 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:46.850 18:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cac9eaaf-6e3d-4578-8c05-2ca0c1e11d6c lbd_nest_0 20456 00:27:47.108 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=bd3f4b15-cf6e-4432-a93b-d987215e2aae 00:27:47.108 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:47.365 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:47.365 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bd3f4b15-cf6e-4432-a93b-d987215e2aae 00:27:47.622 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.881 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:47.881 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:47.881 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:47.881 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:47.881 18:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:47.881 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.087 Initializing NVMe Controllers 00:28:00.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:00.087 Initialization complete. Launching workers. 
00:28:00.087 ======================================================== 00:28:00.087 Latency(us) 00:28:00.087 Device Information : IOPS MiB/s Average min max 00:28:00.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.48 0.02 21598.18 224.01 48703.47 00:28:00.087 ======================================================== 00:28:00.087 Total : 46.48 0.02 21598.18 224.01 48703.47 00:28:00.087 00:28:00.087 18:28:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:00.087 18:28:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.087 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.093 Initializing NVMe Controllers 00:28:10.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:10.094 Initialization complete. Launching workers. 00:28:10.094 ======================================================== 00:28:10.094 Latency(us) 00:28:10.094 Device Information : IOPS MiB/s Average min max 00:28:10.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.17 10.02 12472.69 4983.93 47886.86 00:28:10.094 ======================================================== 00:28:10.094 Total : 80.17 10.02 12472.69 4983.93 47886.86 00:28:10.094 00:28:10.094 18:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:10.094 18:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:10.094 18:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:10.094 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.068 Initializing NVMe Controllers 00:28:20.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:20.068 Initialization complete. Launching workers. 00:28:20.068 ======================================================== 00:28:20.068 Latency(us) 00:28:20.068 Device Information : IOPS MiB/s Average min max 00:28:20.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7071.86 3.45 4524.49 301.63 11957.75 00:28:20.068 ======================================================== 00:28:20.068 Total : 7071.86 3.45 4524.49 301.63 11957.75 00:28:20.068 00:28:20.068 18:28:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:20.068 18:28:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.068 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.043 Initializing NVMe Controllers 00:28:30.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.043 Initialization complete. Launching workers. 
00:28:30.043 ======================================================== 00:28:30.043 Latency(us) 00:28:30.043 Device Information : IOPS MiB/s Average min max 00:28:30.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1693.50 211.69 18921.77 1309.63 40354.34 00:28:30.043 ======================================================== 00:28:30.043 Total : 1693.50 211.69 18921.77 1309.63 40354.34 00:28:30.043 00:28:30.043 18:28:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:30.043 18:28:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:30.043 18:28:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.043 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.011 Initializing NVMe Controllers 00:28:40.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.011 Controller IO queue size 128, less than required. 00:28:40.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.011 Initialization complete. Launching workers. 00:28:40.011 ======================================================== 00:28:40.011 Latency(us) 00:28:40.011 Device Information : IOPS MiB/s Average min max 00:28:40.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11667.12 5.70 10974.24 1546.38 25880.04 00:28:40.011 ======================================================== 00:28:40.011 Total : 11667.12 5.70 10974.24 1546.38 25880.04 00:28:40.011 00:28:40.011 18:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:40.011 18:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.011 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.981 Initializing NVMe Controllers 00:28:49.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.981 Controller IO queue size 128, less than required. 00:28:49.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.981 Initialization complete. Launching workers. 
00:28:49.981 ======================================================== 00:28:49.981 Latency(us) 00:28:49.981 Device Information : IOPS MiB/s Average min max 00:28:49.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.99 150.75 106658.44 15054.56 220389.16 00:28:49.981 ======================================================== 00:28:49.981 Total : 1205.99 150.75 106658.44 15054.56 220389.16 00:28:49.981 00:28:49.981 18:29:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.239 18:29:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bd3f4b15-cf6e-4432-a93b-d987215e2aae 00:28:51.183 18:29:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:51.183 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e0482e52-d02a-4b7d-9170-1808b9b32357 00:28:51.440 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:51.699 rmmod nvme_tcp 00:28:51.699 rmmod nvme_fabrics 00:28:51.699 rmmod nvme_keyring 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1559483 ']' 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1559483 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1559483 ']' 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1559483 00:28:51.699 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559483 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 1559483' 00:28:51.958 killing process with pid 1559483 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1559483 00:28:51.958 18:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1559483 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.863 18:29:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.770 18:29:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:55.770 00:28:55.770 real 1m30.875s 00:28:55.770 user 5m35.471s 00:28:55.770 sys 0m15.445s 00:28:55.770 18:29:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.770 18:29:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:55.770 ************************************ 00:28:55.770 END TEST nvmf_perf 00:28:55.770 ************************************ 00:28:55.770 18:29:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.771 ************************************ 00:28:55.771 START TEST nvmf_fio_host 00:28:55.771 ************************************ 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:55.771 * Looking for test storage... 
00:28:55.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:55.771 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:55.772 18:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:57.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:57.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:57.674 
18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:57.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:57.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:57.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:57.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms
00:28:57.674
00:28:57.674 --- 10.0.0.2 ping statistics ---
00:28:57.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:57.674 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:57.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:57.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:28:57.674
00:28:57.674 --- 10.0.0.1 ping statistics ---
00:28:57.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:57.674 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1571444
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1571444
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1571444 ']'
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:57.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:57.674 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.674 [2024-07-26 18:29:23.628285] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:28:57.674 [2024-07-26 18:29:23.628361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:57.674 EAL: No free 2048 kB hugepages reported on node 1
00:28:57.674 [2024-07-26 18:29:23.665375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:28:57.674 [2024-07-26 18:29:23.697362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:57.674 [2024-07-26 18:29:23.788252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:57.674 [2024-07-26 18:29:23.788314] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:57.674 [2024-07-26 18:29:23.788331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:57.674 [2024-07-26 18:29:23.788344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:57.674 [2024-07-26 18:29:23.788356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
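The trace above is nvmf/common.sh's standard single-host NVMe/TCP fixture: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to play the target, the other (cvl_0_1) stays in the root namespace as the initiator, and one ping in each direction proves the data path before the target comes up. A condensed sketch of the same bring-up, using the interface, namespace, and address values from this run (the netns creation itself happens just before this excerpt; the identical pattern is traced again for nvmf_failover further down):

# target side lives in its own namespace; initiator side stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator
# start the target inside the namespace, then poll its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# waitforlisten polls /var/tmp/spdk.sock (max_retries=100 in the trace above)

The RPC provisioning that follows in the trace (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc1, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.2:4420) then assembles the Malloc1-backed subsystem that the fio runs below exercise.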
00:28:57.674 [2024-07-26 18:29:23.788446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.674 [2024-07-26 18:29:23.788518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.674 [2024-07-26 18:29:23.788610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.674 [2024-07-26 18:29:23.788612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.933 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:57.933 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:57.933 18:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:58.191 [2024-07-26 18:29:24.150232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.191 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:58.191 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.191 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.191 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:58.449 Malloc1 00:28:58.449 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.707 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:58.965 18:29:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.223 [2024-07-26 18:29:25.184729] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.223 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:59.481 18:29:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:59.481 18:29:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.740 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:59.740 fio-3.35 00:28:59.740 Starting 1 thread 00:28:59.740 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.269 00:29:02.269 test: (groupid=0, jobs=1): err= 0: pid=1571800: Fri Jul 26 18:29:27 2024 00:29:02.269 read: IOPS=9066, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec) 00:29:02.269 slat (usec): min=2, max=114, avg= 2.64, stdev= 1.55 00:29:02.269 clat (usec): min=3070, max=13243, avg=7818.28, stdev=597.04 00:29:02.269 lat (usec): min=3108, max=13246, avg=7820.92, stdev=596.95 00:29:02.269 clat percentiles (usec): 00:29:02.269 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:29:02.269 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:29:02.269 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:29:02.269 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11469], 99.95th=[12518], 00:29:02.269 | 99.99th=[13173] 00:29:02.269 bw ( KiB/s): min=35712, max=36728, per=99.85%, avg=36212.00, stdev=438.01, samples=4 00:29:02.269 iops : min= 8928, max= 9182, avg=9053.00, stdev=109.50, samples=4 00:29:02.269 write: IOPS=9075, BW=35.5MiB/s (37.2MB/s)(71.1MiB/2006msec); 0 zone 
resets 00:29:02.269 slat (nsec): min=2221, max=89342, avg=2753.31, stdev=1313.07 00:29:02.269 clat (usec): min=1235, max=12167, avg=6259.44, stdev=509.00 00:29:02.269 lat (usec): min=1242, max=12169, avg=6262.19, stdev=508.93 00:29:02.269 clat percentiles (usec): 00:29:02.269 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:29:02.269 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:29:02.269 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:29:02.269 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9372], 99.95th=[10421], 00:29:02.269 | 99.99th=[11469] 00:29:02.269 bw ( KiB/s): min=36096, max=36496, per=100.00%, avg=36320.00, stdev=189.20, samples=4 00:29:02.269 iops : min= 9024, max= 9124, avg=9080.00, stdev=47.30, samples=4 00:29:02.269 lat (msec) : 2=0.02%, 4=0.12%, 10=99.73%, 20=0.13% 00:29:02.269 cpu : usr=59.20%, sys=34.36%, ctx=52, majf=0, minf=40 00:29:02.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:02.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:02.269 issued rwts: total=18188,18205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:02.269 00:29:02.269 Run status group 0 (all jobs): 00:29:02.269 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.5MB), run=2006-2006msec 00:29:02.269 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.1MiB (74.6MB), run=2006-2006msec 00:29:02.269 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:02.269 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:02.269 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:02.270 18:29:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:02.270 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:02.270 fio-3.35 00:29:02.270 Starting 1 thread 00:29:02.270 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.796 00:29:04.796 test: (groupid=0, jobs=1): err= 0: pid=1572130: Fri Jul 26 18:29:30 2024 00:29:04.796 read: IOPS=7537, BW=118MiB/s (123MB/s)(237MiB/2012msec) 00:29:04.796 slat (usec): min=2, max=106, avg= 3.68, stdev= 1.68 00:29:04.796 clat (usec): min=2916, max=25433, avg=9836.19, stdev=2651.77 00:29:04.796 lat (usec): min=2919, max=25437, avg=9839.86, stdev=2651.84 00:29:04.796 clat percentiles (usec): 00:29:04.796 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7570], 00:29:04.796 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10159], 00:29:04.796 | 70.00th=[10945], 80.00th=[11994], 90.00th=[13173], 95.00th=[14615], 00:29:04.796 | 99.00th=[17695], 99.50th=[19006], 99.90th=[20055], 99.95th=[21103], 00:29:04.796 | 99.99th=[22938] 00:29:04.796 bw ( KiB/s): min=51712, max=73344, per=51.42%, avg=62016.00, stdev=8988.75, samples=4 00:29:04.796 iops : min= 3232, max= 4584, avg=3876.00, stdev=561.80, samples=4 00:29:04.796 write: IOPS=4379, BW=68.4MiB/s (71.7MB/s)(127MiB/1857msec); 0 zone resets 00:29:04.796 slat (usec): min=30, max=125, avg=33.28, stdev= 4.47 00:29:04.796 clat (usec): min=5482, max=25654, avg=12631.76, stdev=3605.89 00:29:04.796 lat (usec): min=5514, max=25686, avg=12665.04, stdev=3606.35 00:29:04.796 clat percentiles (usec): 00:29:04.796 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:29:04.796 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[12387], 00:29:04.796 | 70.00th=[13829], 80.00th=[15795], 90.00th=[18482], 95.00th=[19792], 00:29:04.796 | 99.00th=[22676], 99.50th=[23462], 99.90th=[25297], 99.95th=[25560], 00:29:04.796 | 99.99th=[25560] 00:29:04.796 bw ( KiB/s): min=53056, max=76224, per=91.97%, avg=64440.00, stdev=9654.05, samples=4 00:29:04.796 iops : min= 3316, max= 4764, avg=4027.50, stdev=603.38, samples=4 00:29:04.796 lat (msec) : 4=0.12%, 10=45.49%, 20=52.78%, 50=1.62% 00:29:04.796 cpu : usr=69.97%, sys=25.11%, ctx=38, majf=0, minf=58 
00:29:04.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:29:04.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:04.796 issued rwts: total=15166,8132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:04.796 00:29:04.796 Run status group 0 (all jobs): 00:29:04.796 READ: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=237MiB (248MB), run=2012-2012msec 00:29:04.796 WRITE: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=127MiB (133MB), run=1857-1857msec 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:04.796 18:29:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:08.081 Nvme0n1 00:29:08.081 18:29:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7b8ca4f8-f605-4596-8cb9-716234833788 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7b8ca4f8-f605-4596-8cb9-716234833788 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=7b8ca4f8-f605-4596-8cb9-716234833788 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:11.390 18:29:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:11.390 { 00:29:11.390 "uuid": 
"7b8ca4f8-f605-4596-8cb9-716234833788", 00:29:11.390 "name": "lvs_0", 00:29:11.390 "base_bdev": "Nvme0n1", 00:29:11.390 "total_data_clusters": 930, 00:29:11.390 "free_clusters": 930, 00:29:11.390 "block_size": 512, 00:29:11.390 "cluster_size": 1073741824 00:29:11.390 } 00:29:11.390 ]' 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7b8ca4f8-f605-4596-8cb9-716234833788") .free_clusters' 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="7b8ca4f8-f605-4596-8cb9-716234833788") .cluster_size' 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:11.390 952320 00:29:11.390 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:11.390 9ac22719-18a9-44a9-b480-25690df30eaf 00:29:11.662 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:11.663 18:29:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:11.919 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:12.176 18:29:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:12.436 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:12.436 fio-3.35 00:29:12.436 Starting 1 thread 00:29:12.436 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.962 00:29:14.962 test: (groupid=0, jobs=1): err= 0: pid=1573410: Fri Jul 26 18:29:40 2024 00:29:14.962 read: IOPS=5963, BW=23.3MiB/s (24.4MB/s)(47.7MiB/2049msec) 00:29:14.962 slat (usec): min=2, max=160, avg= 2.71, stdev= 2.42 00:29:14.962 clat (usec): min=848, max=171531, avg=11830.16, stdev=12052.58 00:29:14.962 lat (usec): min=852, max=171570, avg=11832.87, stdev=12052.93 00:29:14.962 clat percentiles (msec): 00:29:14.962 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:14.962 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:14.962 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:29:14.962 | 99.00th=[ 52], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:14.962 | 99.99th=[ 171] 00:29:14.962 bw ( KiB/s): min=16832, max=26960, per=100.00%, avg=24290.00, stdev=4974.27, samples=4 00:29:14.962 iops : min= 4208, max= 6740, avg=6072.50, stdev=1243.57, samples=4 00:29:14.962 write: IOPS=5942, BW=23.2MiB/s (24.3MB/s)(47.6MiB/2049msec); 0 zone resets 00:29:14.962 slat (usec): min=2, max=138, avg= 2.80, stdev= 1.86 00:29:14.962 clat (usec): min=337, max=169369, avg=9460.22, stdev=11244.47 00:29:14.962 lat (usec): min=340, max=169376, avg=9463.02, stdev=11244.83 00:29:14.962 clat percentiles (msec): 00:29:14.962 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:29:14.962 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:14.962 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:14.963 | 99.00th=[ 11], 99.50th=[ 57], 99.90th=[ 169], 99.95th=[ 169], 00:29:14.963 | 99.99th=[ 169] 00:29:14.963 bw ( KiB/s): min=17832, 
max=26560, per=100.00%, avg=24250.00, stdev=4280.37, samples=4 00:29:14.963 iops : min= 4458, max= 6640, avg=6062.50, stdev=1070.09, samples=4 00:29:14.963 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:14.963 lat (msec) : 2=0.03%, 4=0.14%, 10=57.81%, 20=40.95%, 50=0.07% 00:29:14.963 lat (msec) : 100=0.45%, 250=0.52% 00:29:14.963 cpu : usr=55.37%, sys=40.09%, ctx=88, majf=0, minf=40 00:29:14.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:14.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:14.963 issued rwts: total=12220,12177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.963 00:29:14.963 Run status group 0 (all jobs): 00:29:14.963 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=47.7MiB (50.1MB), run=2049-2049msec 00:29:14.963 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=47.6MiB (49.9MB), run=2049-2049msec 00:29:14.963 18:29:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:14.963 18:29:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3f5a15fe-bf47-468e-ad83-6477efea071c 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3f5a15fe-bf47-468e-ad83-6477efea071c 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3f5a15fe-bf47-468e-ad83-6477efea071c 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:16.338 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:16.596 { 00:29:16.596 "uuid": "7b8ca4f8-f605-4596-8cb9-716234833788", 00:29:16.596 "name": "lvs_0", 00:29:16.596 "base_bdev": "Nvme0n1", 00:29:16.596 "total_data_clusters": 930, 00:29:16.596 "free_clusters": 0, 00:29:16.596 "block_size": 512, 00:29:16.596 "cluster_size": 1073741824 00:29:16.596 }, 00:29:16.596 { 00:29:16.596 "uuid": "3f5a15fe-bf47-468e-ad83-6477efea071c", 00:29:16.596 "name": "lvs_n_0", 00:29:16.596 "base_bdev": "9ac22719-18a9-44a9-b480-25690df30eaf", 00:29:16.596 "total_data_clusters": 237847, 00:29:16.596 "free_clusters": 237847, 00:29:16.596 "block_size": 512, 00:29:16.596 "cluster_size": 4194304 00:29:16.596 } 00:29:16.596 ]' 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3f5a15fe-bf47-468e-ad83-6477efea071c") .free_clusters' 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq 
'.[] | select(.uuid=="3f5a15fe-bf47-468e-ad83-6477efea071c") .cluster_size' 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:16.596 951388 00:29:16.596 18:29:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:17.164 1a8cac44-deed-48e1-bc11-d46c26a44a47 00:29:17.164 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:17.421 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:17.678 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.936 18:29:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:17.936 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:18.194 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:18.194 fio-3.35 00:29:18.194 Starting 1 thread 00:29:18.194 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.720 00:29:20.720 test: (groupid=0, jobs=1): err= 0: pid=1574147: Fri Jul 26 18:29:46 2024 00:29:20.720 read: IOPS=5752, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2010msec) 00:29:20.720 slat (nsec): min=1955, max=181930, avg=2673.16, stdev=2647.39 00:29:20.720 clat (usec): min=4596, max=19867, avg=12291.88, stdev=1008.99 00:29:20.720 lat (usec): min=4603, max=19869, avg=12294.56, stdev=1008.84 00:29:20.720 clat percentiles (usec): 00:29:20.720 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:29:20.720 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:29:20.720 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:29:20.720 | 99.00th=[14484], 99.50th=[14877], 99.90th=[17433], 99.95th=[18482], 00:29:20.720 | 99.99th=[19792] 00:29:20.720 bw ( KiB/s): min=21584, max=23576, per=99.87%, avg=22982.00, stdev=942.20, samples=4 00:29:20.720 iops : min= 5396, max= 5894, avg=5745.50, stdev=235.55, samples=4 00:29:20.720 write: IOPS=5740, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2010msec); 0 zone resets 00:29:20.720 slat (usec): min=2, max=124, avg= 2.81, stdev= 1.90 00:29:20.720 clat (usec): min=2285, max=18484, avg=9773.88, stdev=921.38 00:29:20.720 lat (usec): min=2294, max=18487, avg=9776.69, stdev=921.31 00:29:20.720 clat percentiles (usec): 00:29:20.720 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:29:20.720 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:29:20.720 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:29:20.720 | 99.00th=[11731], 99.50th=[12125], 99.90th=[16319], 99.95th=[17433], 00:29:20.720 | 99.99th=[18482] 00:29:20.720 bw ( KiB/s): min=22616, max=23224, per=100.00%, avg=22966.00, stdev=284.00, samples=4 00:29:20.720 iops : min= 5654, max= 5806, avg=5741.50, stdev=71.00, samples=4 00:29:20.720 lat (msec) : 4=0.05%, 10=31.31%, 20=68.64% 00:29:20.720 cpu : usr=54.31%, sys=41.21%, ctx=86, majf=0, minf=40 00:29:20.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:20.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:20.720 issued rwts: total=11563,11539,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:29:20.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:20.720 00:29:20.720 Run status group 0 (all jobs): 00:29:20.721 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2010-2010msec 00:29:20.721 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2010-2010msec 00:29:20.721 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:20.978 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:20.978 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:25.168 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:25.168 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:28.452 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:28.452 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:30.352 18:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:30.352 18:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:30.352 18:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:30.352 18:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:30.352 18:29:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.352 rmmod nvme_tcp 00:29:30.352 rmmod nvme_fabrics 00:29:30.352 rmmod nvme_keyring 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1571444 ']' 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1571444 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1571444 ']' 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1571444 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1571444
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1571444'
killing process with pid 1571444
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1571444
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1571444
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:30.352 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:32.258
00:29:32.258 real 0m36.792s
00:29:32.258 user 2m19.815s
00:29:32.258 sys 0m7.479s
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:29:32.258 ************************************
00:29:32.258 END TEST nvmf_fio_host
00:29:32.258 ************************************
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:32.258 18:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:32.518 ************************************
00:29:32.518 START TEST nvmf_failover
00:29:32.518 ************************************
00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:29:32.518 * Looking for test storage...
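One number worth decoding from the nvmf_fio_host run that just ended: the 952320 and 951388 passed to bdev_lvol_create are get_lvs_free_mb results, i.e. free_clusters times cluster_size expressed in MiB, taken straight from the bdev_lvol_get_lvstores output above. Worked through with this run's values (a sketch of the arithmetic only, not the helper itself):

# lvs_0 on Nvme0n1: 930 free clusters of 1 GiB (1073741824 B) each
echo $(( 930 * (1073741824 / 1048576) ))        # -> 952320 MiB, size of lbd_0
# lvs_n_0 nested on lbd_0: 237847 free clusters of 4 MiB (4194304 B) each
echo $(( 237847 * (4194304 / 1048576) ))        # -> 951388 MiB, size of lbd_nest_0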
00:29:32.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
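nvmftestinit now repeats the environment bring-up for the failover suite. In the trace that follows, gather_supported_nvmf_pci_devs matches the host's NICs against a table of Intel (e810/x722) and Mellanox device IDs, then resolves each matching PCI function to its kernel netdev through sysfs before rebuilding the namespace. Condensed, with this host's values (a sketch of the loop visible below, not the function itself):

pci_devs=("${e810[@]}")   # here: 0000:0a:00.0 and 0000:0a:00.1, both 0x8086:0x159b (ice)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # PCI function -> netdev directory
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path -> cvl_0_0 / cvl_0_1
    net_devs+=("${pci_net_devs[@]}")
done
NVMF_TARGET_INTERFACE=cvl_0_0      # goes back into cvl_0_0_ns_spdk
NVMF_INITIATOR_INTERFACE=cvl_0_1   # keeps 10.0.0.1 in the root namespace

The rest of the trace below then repeats the netns, address, iptables, and ping bring-up exactly as in the fio_host run.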
00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.518 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.519 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.519 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.519 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:32.519 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:32.519 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:32.519 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.448 18:30:00 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.448 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
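The "Found net devices under ..." lines come from globbing each device's net/ directory in sysfs and keeping only interfaces whose operational state is up; cvl_0_0 and cvl_0_1 are the two E810 ports, which then become the target and initiator interfaces. A self-contained equivalent of that lookup (illustrative, not the script itself):

  # Illustration: list the net devices sitting behind one PCI function,
  # the same sysfs glob used by the "Found net devices under" step above.
  pci=0000:0a:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
      netdev=${path##*/}               # strip the sysfs path prefix
      state=$(cat "$path/operstate")   # the script requires "up"
      echo "Found net device under $pci: $netdev ($state)"
  done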
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:34.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:29:34.449 00:29:34.449 --- 10.0.0.2 ping statistics --- 00:29:34.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.449 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:34.449 00:29:34.449 --- 10.0.0.1 ping statistics --- 00:29:34.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.449 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1577544 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:34.449 18:30:00 
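Condensed, nvmf_tcp_init wires the two ports into a back-to-back test topology: cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2), an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path before the target starts. The same steps as they would read in a standalone script, taken directly from the trace above:

  # The netns wiring performed above, gathered in one place.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator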
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1577544 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1577544 ']' 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:34.449 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:34.707 [2024-07-26 18:30:00.616417] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:34.707 [2024-07-26 18:30:00.616493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.707 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.707 [2024-07-26 18:30:00.654772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:34.707 [2024-07-26 18:30:00.686779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:34.707 [2024-07-26 18:30:00.781348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.707 [2024-07-26 18:30:00.781404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.707 [2024-07-26 18:30:00.781418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.707 [2024-07-26 18:30:00.781430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.707 [2024-07-26 18:30:00.781441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
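waitforlisten 1577544 blocks until the freshly started nvmf_tgt is both alive and answering RPCs on /var/tmp/spdk.sock. A rough sketch of the guarantee it provides; the real helper lives in autotest_common.sh and differs in detail:

  # Sketch: poll until the target pid is alive AND its RPC socket answers.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  pid=1577544
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "target exited early"; exit 1; }
      if "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          break   # RPC server is up; the test may proceed
      fi
      sleep 0.1
  done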
00:29:34.707 [2024-07-26 18:30:00.781526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.707 [2024-07-26 18:30:00.781595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.707 [2024-07-26 18:30:00.781589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.965 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:35.222 [2024-07-26 18:30:01.160328] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.222 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:35.479 Malloc0 00:29:35.479 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.737 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:35.994 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.252 [2024-07-26 18:30:02.238829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.252 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:36.510 [2024-07-26 18:30:02.491530] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:36.510 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:36.768 [2024-07-26 18:30:02.760396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1577903 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1577903 /var/tmp/bdevperf.sock 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1577903 ']' 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:36.768 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:37.026 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.026 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:37.026 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:37.284 NVMe0n1 00:29:37.285 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:37.850 00:29:37.850 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1578038 00:29:37.850 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:37.850 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:38.786 18:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.045 18:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:42.327 18:30:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.585 00:29:42.585 18:30:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:42.844 [2024-07-26 18:30:08.794182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) 
to be set 00:29:42.844 [2024-07-26 18:30:08.794263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.844 [2024-07-26 18:30:08.794576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794810] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.794997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the 
state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 [2024-07-26 18:30:08.795548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ec250 is same with the state(5) to be set 00:29:42.845 18:30:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:46.132 18:30:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.132 [2024-07-26 18:30:12.043845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.132 18:30:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:47.069 18:30:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:47.329 [2024-07-26 18:30:13.320681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320742] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.320999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.321011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the state(5) to be set 00:29:47.329 [2024-07-26 18:30:13.321023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecff0 is same with the 
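Put together, the listener churn that produced the error bursts above is a short RPC sequence: bdevperf (over /var/tmp/bdevperf.sock) attaches additional paths to the same subsystem while the script removes and re-adds target listeners, forcing the host to fail over between 4420, 4421, and 4422. Condensed from the host/failover.sh steps in this trace, with rpc_py standing in for the full scripts/rpc.py path:

  # The listener dance driving the failover test, condensed from the trace.
  nqn=nqn.2016-06.io.spdk:cnode1
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn    # primary path
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn    # second path
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 3                                                    # host fails over to 4421
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn    # third path
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 3                                                    # host fails over to 4422
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422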
00:29:47.329 18:30:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1578038 00:29:53.902 0 00:29:53.902 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1577903 00:29:53.902 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1577903 ']' 00:29:53.902 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1577903 00:29:53.902 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:53.902 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:53.902 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1577903 00:29:53.903 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:53.903 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:53.903 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1577903' 00:29:53.903 killing process with pid 1577903 00:29:53.903 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1577903 00:29:53.903 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1577903 00:29:53.903 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:53.903 [2024-07-26 18:30:02.824199] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:53.903 [2024-07-26 18:30:02.824279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577903 ] 00:29:53.903 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.903 [2024-07-26 18:30:02.856540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK.
Enabled only for validation. 00:29:53.903 [2024-07-26 18:30:02.884737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.903 [2024-07-26 18:30:02.970972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.903 Running I/O for 15 seconds... 00:29:53.903 [2024-07-26 18:30:05.132540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.903 [2024-07-26 18:30:05.132796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.903 [2024-07-26 18:30:05.132825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.903 [2024-07-26 18:30:05.132855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.903 [2024-07-26 18:30:05.132870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.903 [... the same nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* "ABORTED - SQ DELETION (00/08) qid:1" pair repeats for each remaining in-flight READ/WRITE (LBAs 76560 through 77032 and onward in this excerpt), logged while the qpair on the removed 4420 listener was deleted ...]
00:29:53.905 [2024-07-26 18:30:05.134715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.905 [2024-07-26 18:30:05.134950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.905 [2024-07-26 18:30:05.134965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.134977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.134992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.906 [2024-07-26 18:30:05.135807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.906 [2024-07-26 18:30:05.135821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.135834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.135849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.135862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.135876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.135889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.135907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.135921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.135935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.135948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.135963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.135976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.135991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.907 [2024-07-26 18:30:05.136374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd46cb0 is same with the state(5) to be set 00:29:53.907 [2024-07-26 18:30:05.136405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.907 [2024-07-26 18:30:05.136416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.907 [2024-07-26 18:30:05.136428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:29:53.907 [2024-07-26 18:30:05.136440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.907 [2024-07-26 18:30:05.136497] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd46cb0 was disconnected and freed. reset controller. 
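Every completion in the run above carries the status pair (00/08). SPDK's spdk_nvme_print_completion formats NVMe completion status as (SCT/SC): Status Code Type 0x0 is the Generic Command Status set, and Status Code 0x08 within it is the spec-defined "Command Aborted due to SQ Deletion", which is exactly what queued I/O is expected to report while the submission queue is torn down for a controller reset. A minimal decoder sketch in plain Python follows; it is illustrative, not SPDK code, and tabulates only the handful of generic status codes relevant here.

    # decode_status.py - illustrative helper (not part of SPDK): map the
    # (SCT/SC) pair printed by spdk_nvme_print_completion, e.g. "(00/08)",
    # to a readable name.
    GENERIC_SCT = 0x0  # Status Code Type 0x0: Generic Command Status

    GENERIC_SC = {
        0x00: "SUCCESS",
        0x04: "DATA TRANSFER ERROR",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",  # the code seen throughout this log
    }

    def decode(sct: int, sc: int) -> str:
        # Fall back to raw hex for any type/code not tabulated above.
        if sct == GENERIC_SCT:
            return GENERIC_SC.get(sc, "GENERIC (sc=0x%02x)" % sc)
        return "sct=0x%x sc=0x%02x" % (sct, sc)

    print(decode(0x0, 0x08))  # prints: ABORTED - SQ DELETION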
00:29:53.907 [2024-07-26 18:30:05.136514] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:53.907 [2024-07-26 18:30:05.136547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.907 [2024-07-26 18:30:05.136564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.907 [2024-07-26 18:30:05.136579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.907 [2024-07-26 18:30:05.136592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.907 [2024-07-26 18:30:05.136605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.907 [2024-07-26 18:30:05.136618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.907 [2024-07-26 18:30:05.136632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.907 [2024-07-26 18:30:05.136644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.907 [2024-07-26 18:30:05.136657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.907 [2024-07-26 18:30:05.136719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53850 (9): Bad file descriptor
00:29:53.907 [2024-07-26 18:30:05.139978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.907 [2024-07-26 18:30:05.264247] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
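The drain-and-failover sequence above repeats for each path, and each drain emits hundreds of near-identical NOTICE pairs (elided above and below). A short, hypothetical helper for condensing such output follows; the regex assumes only the nvme_qpair.c print format visible in this log, and the script is not part of the SPDK tree.

    # summarize_aborts.py - hypothetical log condenser for the NOTICE pairs
    # emitted by nvme_io_qpair_print_command during a qpair drain.
    import re
    import sys
    from collections import defaultdict

    # Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
    # cid:50 nsid:1 lba:76800 len:8"
    CMD = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)"
    )

    def summarize(stream):
        lbas = defaultdict(list)  # (opcode, sqid) -> LBAs of drained commands
        for line in stream:
            m = CMD.search(line)
            if m:
                op, sqid, lba, _length = m.groups()
                lbas[(op, int(sqid))].append(int(lba))
        for (op, sqid), v in sorted(lbas.items()):
            print("%s sqid:%d: %d commands, lba %d..%d"
                  % (op, sqid, len(v), min(v), max(v)))

    if __name__ == "__main__":
        summarize(sys.stdin)

Fed the raw console log on stdin, this reduces the first drained qpair above to a single line per opcode, e.g. "READ sqid:1: 87 commands, lba 76800..77488".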
00:29:53.907 [2024-07-26 18:30:08.793713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:53.907 [2024-07-26 18:30:08.793785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 near-identical NOTICE pairs elided: queued ASYNC EVENT REQUESTs cid:2, cid:1 and cid:0 on qid:0, all completed with ABORTED - SQ DELETION (00/08) ...]
00:29:53.908 [2024-07-26 18:30:08.793885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53850 is same with the state(5) to be set
00:29:53.908 [2024-07-26 18:30:08.795865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.908 [2024-07-26 18:30:08.795892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... near-identical NOTICE pairs elided: queued READs for lba 101832 through 102248 and queued WRITEs for lba 102312 through 102560 (len:8 each) on qid:1, all completed with ABORTED - SQ DELETION (00/08) ...]
00:29:53.911 [2024-07-26 18:30:08.798438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102568 len:8
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:53.911 [2024-07-26 18:30:08.798729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.911 [2024-07-26 18:30:08.798899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.911 [2024-07-26 18:30:08.798914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.912 [2024-07-26 18:30:08.798927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.798941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.912 [2024-07-26 18:30:08.798956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.798972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.912 [2024-07-26 18:30:08.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.912 [2024-07-26 18:30:08.799014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102736 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102744 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102752 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102760 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102768 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102776 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 
18:30:08.799361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102784 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102792 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102800 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102808 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102816 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102824 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799667] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102832 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102840 len:8 PRP1 0x0 PRP2 0x0 00:29:53.912 [2024-07-26 18:30:08.799753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.912 [2024-07-26 18:30:08.799767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.912 [2024-07-26 18:30:08.799778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.912 [2024-07-26 18:30:08.799790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102256 len:8 PRP1 0x0 PRP2 0x0 00:29:53.913 [2024-07-26 18:30:08.799803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:08.799816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.913 [2024-07-26 18:30:08.799827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.913 [2024-07-26 18:30:08.799838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102264 len:8 PRP1 0x0 PRP2 0x0 00:29:53.913 [2024-07-26 18:30:08.799850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:08.799863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.913 [2024-07-26 18:30:08.799873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.913 [2024-07-26 18:30:08.799884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102272 len:8 PRP1 0x0 PRP2 0x0 00:29:53.913 [2024-07-26 18:30:08.799896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:08.799912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.913 [2024-07-26 18:30:08.799923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.913 [2024-07-26 18:30:08.799935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102280 len:8 PRP1 0x0 PRP2 0x0 00:29:53.913 [2024-07-26 18:30:08.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:08.799960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o
00:29:53.913 [2024-07-26 18:30:08.799971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:53.913 [2024-07-26 18:30:08.799982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102288 len:8 PRP1 0x0 PRP2 0x0
00:29:53.913 [2024-07-26 18:30:08.799994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.913 [2024-07-26 18:30:08.800012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:53.913 [2024-07-26 18:30:08.800023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:53.913 [2024-07-26 18:30:08.800034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102296 len:8 PRP1 0x0 PRP2 0x0
00:29:53.913 [2024-07-26 18:30:08.800047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.913 [2024-07-26 18:30:08.800065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:53.913 [2024-07-26 18:30:08.800078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:53.913 [2024-07-26 18:30:08.800089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102304 len:8 PRP1 0x0 PRP2 0x0
00:29:53.913 [2024-07-26 18:30:08.800104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.913 [2024-07-26 18:30:08.800159] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd77670 was disconnected and freed. reset controller.
00:29:53.913 [2024-07-26 18:30:08.800177] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:53.913 [2024-07-26 18:30:08.800192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.913 [2024-07-26 18:30:08.803453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.913 [2024-07-26 18:30:08.803493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53850 (9): Bad file descriptor
00:29:53.913 [2024-07-26 18:30:08.848599] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
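The burst above ends with the part of this run that actually matters: bdev_nvme frees the disconnected qpair 0xd77670, fails over the TCP path from 10.0.0.2:4421 to 10.0.0.2:4422, and reports "Resetting controller successful" once nqn.2016-06.io.spdk:cnode1 reconnects. Every in-flight command was completed with generic status (00/08), which in the NVMe base spec is SCT 0x0 / SC 0x08, "Command Aborted due to SQ Deletion" -- exactly what is expected when queues are torn down for a failover. As a minimal sketch only (this is not an SPDK tool; the regexes simply assume the nvme_qpair.c/bdev_nvme.c print format visible in this log, with one record per console line as originally emitted), a few lines of Python can condense such a flood into per-status counts plus the failover transitions:

#!/usr/bin/env python3
# Sketch: summarize an SPDK autotest console log like the one above.
# Assumes the nvme_qpair.c print format shown in this log; not part of SPDK.
import re
import sys
from collections import Counter

# Matches e.g. "... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ..."
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<code>[0-9a-f]{2}/[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)
# Matches e.g. "... Start failover from 10.0.0.2:4421 to 10.0.0.2:4422"
FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(lines):
    counts = Counter()   # (status text, status code, qid) -> occurrences
    failovers = []       # (old trid, new trid) pairs in log order
    for line in lines:
        m = COMPLETION.search(line)
        if m:
            counts[(m["status"], m["code"], int(m["qid"]))] += 1
            continue
        f = FAILOVER.search(line)
        if f:
            failovers.append(f.groups())
    return counts, failovers

if __name__ == "__main__":
    counts, failovers = summarize(sys.stdin)
    for (status, code, qid), n in counts.most_common():
        print(f"{n:6d}  qid:{qid}  {status} ({code})")
    for old, new in failovers:
        print(f"failover: {old} -> {new}")

Fed this console log on stdin, the script would print one line per distinct (status, qid) pair and one line per "Start failover" transition, which is usually all that needs to be read out of a dump of this size.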
00:29:53.913 [2024-07-26 18:30:13.321467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.913 [2024-07-26 18:30:13.321903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-26 18:30:13.321916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.321931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.321944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.321959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.321977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.321992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48232 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.914 [2024-07-26 18:30:13.322644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.914 [2024-07-26 18:30:13.322671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.914 
[2024-07-26 18:30:13.322700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.914 [2024-07-26 18:30:13.322730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.914 [2024-07-26 18:30:13.322759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.914 [2024-07-26 18:30:13.322774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.914 [2024-07-26 18:30:13.322787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.322815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.322843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.322872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.322900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.322928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.322956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.322984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.322998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.915 [2024-07-26 18:30:13.323359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.915 [2024-07-26 18:30:13.323476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.915 [2024-07-26 18:30:13.323489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.323521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.323549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.323577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.323604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 
[2024-07-26 18:30:13.323898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.323981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.323994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.324022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.916 [2024-07-26 18:30:13.324049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.916 [2024-07-26 18:30:13.324398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.916 [2024-07-26 18:30:13.324413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.917 [2024-07-26 18:30:13.324782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48560 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.324976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.324993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 
[2024-07-26 18:30:13.325122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.917 [2024-07-26 18:30:13.325266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.917 [2024-07-26 18:30:13.325310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.917 [2024-07-26 18:30:13.325322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48688 len:8 PRP1 0x0 PRP2 0x0 00:29:53.917 [2024-07-26 18:30:13.325335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.917 [2024-07-26 18:30:13.325393] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd77330 was disconnected and freed. reset controller. 
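Annotation: the wall of paired NOTICE lines above is bdev_nvme draining qpair 0xd77330 during path teardown. Every in-flight READ/WRITE completes with NVMe generic status 00h/08h, "Command Aborted due to SQ Deletion", and nvme_qpair_abort_queued_reqs then manually completes whatever was still queued before the controller reset. A hedged one-liner for tallying the abort storm from a saved copy of this output (the file name build.log is a placeholder, not something this job writes):

    grep -c 'ABORTED - SQ DELETION' build.log                          # total aborted commands
    grep -oE '(READ|WRITE) sqid:[0-9]+' build.log | sort | uniq -c     # aborts split by opcode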
00:29:53.917 [2024-07-26 18:30:13.325411] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:53.917 [2024-07-26 18:30:13.325443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.918 [2024-07-26 18:30:13.325465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.918 [2024-07-26 18:30:13.325480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.918 [2024-07-26 18:30:13.325493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.918 [2024-07-26 18:30:13.325507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.918 [2024-07-26 18:30:13.325519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.918 [2024-07-26 18:30:13.325533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.918 [2024-07-26 18:30:13.325545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.918 [2024-07-26 18:30:13.325558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.918 [2024-07-26 18:30:13.328789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.918 [2024-07-26 18:30:13.328828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53850 (9): Bad file descriptor 00:29:53.918 [2024-07-26 18:30:13.405997] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
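Annotation on the failover step above: bdev_nvme_failover_trid moves the active path from 10.0.0.2:4422 back to 10.0.0.2:4420, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted as part of the disconnect, and "Resetting controller successful" marks the point where I/O resumes on the new path. Failover only has somewhere to go because the test registered the same subsystem at several ports under one controller name; a hedged sketch of that registration, mirroring the rpc.py calls traced further down (SPDK_DIR stands in for the jenkins workspace path):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    for port in 4420 4421 4422; do
        # same bdev name + NQN each time, so each call adds another path for NVMe0
        $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port \
             -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done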
00:29:53.918 00:29:53.918 Latency(us) 00:29:53.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.918 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:53.918 Verification LBA range: start 0x0 length 0x4000 00:29:53.918 NVMe0n1 : 15.01 8748.20 34.17 613.12 0.00 13644.87 819.20 17767.54 00:29:53.918 =================================================================================================================== 00:29:53.918 Total : 8748.20 34.17 613.12 0.00 13644.87 819.20 17767.54 00:29:53.918 Received shutdown signal, test time was about 15.000000 seconds 00:29:53.918 00:29:53.918 Latency(us) 00:29:53.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.918 =================================================================================================================== 00:29:53.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1580267 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1580267 /var/tmp/bdevperf.sock 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1580267 ']' 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:53.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
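Annotation: the bdevperf instance launched at host/failover.sh@72 runs with -z, so it comes up idle on /var/tmp/bdevperf.sock and performs no I/O until the companion script issues perform_tests (that trigger appears below at host/failover.sh@89). A hedged reduction of the two-step flow, with SPDK_DIR again standing in for the workspace path:

    # step 1: start bdevperf in wait-for-RPC mode, workload configured up front
    $SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 &
    # step 2: once the socket is up and the NVMe0 paths are attached over RPC,
    # kick off the actual run:
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests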
00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:53.918 [2024-07-26 18:30:19.821327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:53.918 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:54.197 [2024-07-26 18:30:20.098167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:54.197 18:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.455 NVMe0n1 00:29:54.455 18:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:55.022 00:29:55.022 18:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:55.279 00:29:55.279 18:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:55.279 18:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:55.537 18:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:55.795 18:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:59.085 18:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:59.085 18:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:59.085 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1580928 00:29:59.085 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:59.085 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1580928 00:30:00.459 0 00:30:00.459 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:00.459 [2024-07-26 18:30:19.340422] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:00.459 [2024-07-26 18:30:19.340505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580267 ] 00:30:00.459 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.459 [2024-07-26 18:30:19.373503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:00.459 [2024-07-26 18:30:19.402424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.459 [2024-07-26 18:30:19.484933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.459 [2024-07-26 18:30:21.717190] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:00.459 [2024-07-26 18:30:21.717281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.459 [2024-07-26 18:30:21.717303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.459 [2024-07-26 18:30:21.717320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.459 [2024-07-26 18:30:21.717334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.459 [2024-07-26 18:30:21.717354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.459 [2024-07-26 18:30:21.717368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.459 [2024-07-26 18:30:21.717382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.459 [2024-07-26 18:30:21.717396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.459 [2024-07-26 18:30:21.717409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.460 [2024-07-26 18:30:21.717451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.460 [2024-07-26 18:30:21.717483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x809850 (9): Bad file descriptor 00:30:00.460 [2024-07-26 18:30:21.728334] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:00.460 Running I/O for 1 seconds... 
00:30:00.460 00:30:00.460 Latency(us) 00:30:00.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.460 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:00.460 Verification LBA range: start 0x0 length 0x4000 00:30:00.460 NVMe0n1 : 1.01 8808.15 34.41 0.00 0.00 14467.13 3155.44 12913.02 00:30:00.460 =================================================================================================================== 00:30:00.460 Total : 8808.15 34.41 0.00 0.00 14467.13 3155.44 12913.02 00:30:00.460 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:00.460 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:00.460 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:00.718 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:00.718 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:00.975 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:01.233 18:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1580267 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1580267 ']' 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1580267 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580267 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580267' 00:30:04.521 killing process with pid 1580267 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1580267 00:30:04.521 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1580267 00:30:04.780 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:04.780 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:05.040 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:05.040 rmmod nvme_tcp 00:30:05.040 rmmod nvme_fabrics 00:30:05.040 rmmod nvme_keyring 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1577544 ']' 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1577544 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1577544 ']' 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1577544 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1577544 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1577544' 00:30:05.040 killing process with pid 1577544 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1577544 00:30:05.040 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1577544 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.299 18:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:07.833 00:30:07.833 real 0m34.946s 00:30:07.833 user 2m2.855s 00:30:07.833 sys 0m6.047s 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:07.833 ************************************ 00:30:07.833 END TEST nvmf_failover 00:30:07.833 ************************************ 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:07.833 18:30:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.833 ************************************ 00:30:07.833 START TEST nvmf_host_discovery 00:30:07.834 ************************************ 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:07.834 * Looking for test storage... 00:30:07.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:07.834 18:30:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:07.834 18:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:09.214 18:30:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:09.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:09.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:09.214 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:09.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:09.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.215 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:09.473 18:30:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:09.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:30:09.473 00:30:09.473 --- 10.0.0.2 ping statistics --- 00:30:09.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.473 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:30:09.473 00:30:09.473 --- 10.0.0.1 ping statistics --- 00:30:09.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.473 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1583603 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1583603 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1583603 ']' 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
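Annotation: nvmf_tcp_init above splits the dual-port E810 NIC into a loopback topology: cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove both directions work before nvmfappstart launches nvmf_tgt inside that namespace. A hedged condensation of the plumbing just traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk sh -c \
        'ip addr add 10.0.0.2/24 dev cvl_0_0; ip link set cvl_0_0 up; ip link set lo up'
    # let initiator-side traffic to the NVMe/TCP port through:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT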
00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.473 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.473 [2024-07-26 18:30:35.498625] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:09.473 [2024-07-26 18:30:35.498700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.473 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.473 [2024-07-26 18:30:35.537307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:09.473 [2024-07-26 18:30:35.564144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.731 [2024-07-26 18:30:35.649269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.731 [2024-07-26 18:30:35.649319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.732 [2024-07-26 18:30:35.649332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.732 [2024-07-26 18:30:35.649344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.732 [2024-07-26 18:30:35.649365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
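Annotation: the app_setup_trace notices above mean nvmf_tgt was started with every tracepoint group enabled (-e 0xFFFF) and is writing to /dev/shm/nvmf_trace.0. Hedged commands for reading it, assuming an SPDK build tree; the -f form for processing an offline copy is an assumption based on the notice about copying the file:

    $SPDK_DIR/build/bin/spdk_trace -s nvmf -i 0              # snapshot the live app
    cp /dev/shm/nvmf_trace.0 /tmp/ && \
        $SPDK_DIR/build/bin/spdk_trace -f /tmp/nvmf_trace.0  # offline analysis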
00:30:09.732 [2024-07-26 18:30:35.649389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 [2024-07-26 18:30:35.787356] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 [2024-07-26 18:30:35.795582] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 null0 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 null1 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=1583675 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1583675 /tmp/host.sock 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1583675 ']' 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:09.732 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.732 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.732 [2024-07-26 18:30:35.870305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:09.732 [2024-07-26 18:30:35.870404] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583675 ] 00:30:09.992 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.992 [2024-07-26 18:30:35.902710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
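At this point two SPDK processes are up: the target (EAL core mask 0x2, default RPC socket /var/tmp/spdk.sock) and a second nvmf_tgt playing the host role on its own RPC socket so the two don't collide. The host-side launch is visible verbatim in the xtrace; a condensed sketch (waitforlisten is the suite's poll-until-the-RPC-socket-accepts helper, and the specifics beyond the logged command line are inferred):

    # Host-side instance, as invoked at host/discovery.sh@44 (pid 1583675 in this run):
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock   # retries up to 100 times per the helper's max_retries

    # Subsequent RPCs pick an instance with -s: no flag for the target,
    # '-s /tmp/host.sock' for the host side.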
00:30:09.992 [2024-07-26 18:30:35.930051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.992 [2024-07-26 18:30:36.015206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.992 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
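The rpc_cmd/jq/sort/xargs pipelines repeated throughout this test are its two query helpers, exercised here while everything is still empty ('' == ''). Reconstructed from the xtrace at host/discovery.sh@55 and @59 (a sketch; rpc_cmd is the suite's wrapper around scripts/rpc.py):

    get_subsystem_names() {
        # Names of NVMe controllers attached on the host instance, as one sorted line.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdevs created by discovery, e.g. 'nvme0n1 nvme0n2' once namespaces attach.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }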
00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:10.251 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.511 [2024-07-26 18:30:36.417229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:30:10.511 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:11.082 [2024-07-26 18:30:37.145251] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:11.082 [2024-07-26 18:30:37.145281] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:11.082 [2024-07-26 18:30:37.145307] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:11.342 
[2024-07-26 18:30:37.231582] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:11.342 [2024-07-26 18:30:37.296520] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:11.342 [2024-07-26 18:30:37.296547] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:11.601 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
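The '(( max-- )) / eval / sleep 1' pattern wrapped around each check is the suite's waitforcondition helper: it re-evaluates a condition string up to ten times at one-second intervals and returns as soon as the condition holds, which is how the test rides out the asynchronous discovery attach seen above. Reconstructed from the xtrace at common/autotest_common.sh@914-920 (a sketch; the timeout branch is assumed, since it never fires in this run):

    waitforcondition() {
        local cond=$1   # passed as a string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # string form so it is re-evaluated on every pass
            sleep 1
        done
        return 1   # assumed timeout path, not exercised here
    }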
00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.602 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.861 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.862 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.862 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:12.123 18:30:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 [2024-07-26 18:30:38.057952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:12.123 [2024-07-26 18:30:38.058906] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:12.123 [2024-07-26 18:30:38.058960] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:12.123 18:30:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.123 [2024-07-26 18:30:38.186793] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:12.123 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:12.123 [2024-07-26 18:30:38.249402] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:12.123 [2024-07-26 18:30:38.249425] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:12.123 [2024-07-26 18:30:38.249435] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:13.082 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:13.341 18:30:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.341 [2024-07-26 18:30:39.298836] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:13.341 [2024-07-26 18:30:39.298877] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:13.341 [2024-07-26 18:30:39.306998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.341 [2024-07-26 18:30:39.307032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.341 [2024-07-26 18:30:39.307066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.341 [2024-07-26 18:30:39.307081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.341 [2024-07-26 18:30:39.307096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.341 [2024-07-26 18:30:39.307110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.341 [2024-07-26 18:30:39.307124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.341 [2024-07-26 18:30:39.307138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.341 [2024-07-26 18:30:39.307152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.341 [2024-07-26 18:30:39.316991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.341 [2024-07-26 18:30:39.327032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.341 [2024-07-26 18:30:39.327396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.341 [2024-07-26 18:30:39.327428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7046e0 with addr=10.0.0.2, port=4420 00:30:13.341 [2024-07-26 18:30:39.327445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.341 [2024-07-26 18:30:39.327469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.341 [2024-07-26 18:30:39.327511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.341 [2024-07-26 18:30:39.327529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.341 [2024-07-26 18:30:39.327544] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.341 [2024-07-26 18:30:39.327565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.341 [2024-07-26 18:30:39.337132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.341 [2024-07-26 18:30:39.337327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.341 [2024-07-26 18:30:39.337379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7046e0 with addr=10.0.0.2, port=4420 00:30:13.341 [2024-07-26 18:30:39.337396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.341 [2024-07-26 18:30:39.337429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.341 [2024-07-26 18:30:39.337449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.341 [2024-07-26 18:30:39.337463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.341 [2024-07-26 18:30:39.337487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.341 [2024-07-26 18:30:39.337521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.341 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:13.342 [2024-07-26 18:30:39.347206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.342 [2024-07-26 18:30:39.347596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.342 [2024-07-26 18:30:39.347641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7046e0 with addr=10.0.0.2, port=4420 00:30:13.342 [2024-07-26 18:30:39.347662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.342 [2024-07-26 18:30:39.347689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.342 [2024-07-26 18:30:39.347730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.342 [2024-07-26 18:30:39.347749] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.342 [2024-07-26 18:30:39.347765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.342 [2024-07-26 18:30:39.347788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:13.342 [2024-07-26 18:30:39.357285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.342 [2024-07-26 18:30:39.357565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.342 [2024-07-26 18:30:39.357598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7046e0 with addr=10.0.0.2, port=4420 00:30:13.342 [2024-07-26 18:30:39.357617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.342 [2024-07-26 18:30:39.357642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.342 [2024-07-26 18:30:39.357679] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.342 [2024-07-26 18:30:39.357700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.342 [2024-07-26 18:30:39.357716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.342 [2024-07-26 18:30:39.357744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
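Each 'connect() failed, errno = 111' in this stretch is ECONNREFUSED: the 4420 listener was just removed, so every bdev_nvme reconnect attempt against that port is refused and the controller reset loop keeps failing by design, while the 4421 path stays healthy. A target-side cross-check one could run (illustrative only, not part of this test run; nvmf_subsystem_get_listeners is a standard SPDK RPC):

    # After the removal, only 4421 should be listed for the subsystem.
    ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0 | jq -r '.[].address.trsvcid'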
00:30:13.342 [2024-07-26 18:30:39.367371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.342 [2024-07-26 18:30:39.367626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.342 [2024-07-26 18:30:39.367658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7046e0 with addr=10.0.0.2, port=4420 00:30:13.342 [2024-07-26 18:30:39.367676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.342 [2024-07-26 18:30:39.367701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.342 [2024-07-26 18:30:39.367738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.342 [2024-07-26 18:30:39.367758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.342 [2024-07-26 18:30:39.367774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.342 [2024-07-26 18:30:39.367810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.342 [2024-07-26 18:30:39.377459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.342 [2024-07-26 18:30:39.377659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.342 [2024-07-26 18:30:39.377690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7046e0 with addr=10.0.0.2, port=4420 00:30:13.342 [2024-07-26 18:30:39.377708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7046e0 is same with the state(5) to be set 00:30:13.342 [2024-07-26 18:30:39.377733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7046e0 (9): Bad file descriptor 00:30:13.342 [2024-07-26 18:30:39.377756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.342 [2024-07-26 18:30:39.377772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.342 [2024-07-26 18:30:39.377787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.342 [2024-07-26 18:30:39.377822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
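On the host side the same removal is confirmed through the path list: the discovery records just below show the 4420 path 'not found' and dropped while 4421 is 'found again', after which the test expects get_subsystem_paths to report only 4421. That helper, reconstructed from the xtrace at host/discovery.sh@63 (a sketch):

    get_subsystem_paths() {
        # Sorted service ports of the active paths for one controller:
        # '4420 4421' before the listener removal, '4421' after.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }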
00:30:13.342 [2024-07-26 18:30:39.386822] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:13.342 [2024-07-26 18:30:39.386855] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # get_notification_count 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.342 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:13.601 18:30:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.601 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.536 [2024-07-26 18:30:40.675261] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:14.536 [2024-07-26 18:30:40.675305] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:14.536 [2024-07-26 18:30:40.675328] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:14.795 [2024-07-26 18:30:40.761588] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:15.055 [2024-07-26 18:30:41.030794] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:15.055 [2024-07-26 18:30:41.030842] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:30:15.055 request: 00:30:15.055 { 00:30:15.055 "name": "nvme", 00:30:15.055 "trtype": "tcp", 00:30:15.055 "traddr": "10.0.0.2", 00:30:15.055 "adrfam": "ipv4", 00:30:15.055 "trsvcid": "8009", 00:30:15.055 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:15.055 "wait_for_attach": true, 00:30:15.055 "method": "bdev_nvme_start_discovery", 00:30:15.055 "req_id": 1 00:30:15.055 } 00:30:15.055 Got JSON-RPC error response 00:30:15.055 response: 00:30:15.055 { 00:30:15.055 "code": -17, 00:30:15.055 "message": "File exists" 00:30:15.055 } 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.055 request: 00:30:15.055 { 00:30:15.055 "name": "nvme_second", 00:30:15.055 "trtype": "tcp", 00:30:15.055 "traddr": "10.0.0.2", 00:30:15.055 "adrfam": "ipv4", 00:30:15.055 "trsvcid": "8009", 00:30:15.055 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:15.055 "wait_for_attach": true, 00:30:15.055 "method": "bdev_nvme_start_discovery", 00:30:15.055 "req_id": 1 00:30:15.055 } 00:30:15.055 Got JSON-RPC error response 00:30:15.055 response: 00:30:15.055 { 00:30:15.055 "code": -17, 00:30:15.055 "message": "File exists" 00:30:15.055 } 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.055 18:30:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:15.055 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.315 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.254 [2024-07-26 18:30:42.238922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-26 18:30:42.238992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742ec0 with addr=10.0.0.2, port=8010 00:30:16.254 [2024-07-26 18:30:42.239024] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:16.254 [2024-07-26 18:30:42.239040] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:16.254 [2024-07-26 18:30:42.239054] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:17.193 [2024-07-26 18:30:43.241340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.193 [2024-07-26 18:30:43.241423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742ec0 with addr=10.0.0.2, port=8010 00:30:17.193 [2024-07-26 18:30:43.241455] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:17.193 [2024-07-26 18:30:43.241480] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:17.193 [2024-07-26 18:30:43.241494] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:18.128 [2024-07-26 18:30:44.243510] 
bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:18.128 request: 00:30:18.128 { 00:30:18.128 "name": "nvme_second", 00:30:18.128 "trtype": "tcp", 00:30:18.128 "traddr": "10.0.0.2", 00:30:18.128 "adrfam": "ipv4", 00:30:18.128 "trsvcid": "8010", 00:30:18.128 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:18.128 "wait_for_attach": false, 00:30:18.128 "attach_timeout_ms": 3000, 00:30:18.128 "method": "bdev_nvme_start_discovery", 00:30:18.128 "req_id": 1 00:30:18.128 } 00:30:18.128 Got JSON-RPC error response 00:30:18.128 response: 00:30:18.128 { 00:30:18.128 "code": -110, 00:30:18.128 "message": "Connection timed out" 00:30:18.128 } 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:18.128 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1583675 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:18.386 rmmod nvme_tcp 00:30:18.386 rmmod nvme_fabrics 00:30:18.386 rmmod nvme_keyring 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:18.386 18:30:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1583603 ']' 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1583603 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1583603 ']' 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1583603 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583603 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583603' 00:30:18.386 killing process with pid 1583603 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1583603 00:30:18.386 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1583603 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.643 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:20.557 00:30:20.557 real 0m13.226s 00:30:20.557 user 0m19.415s 00:30:20.557 sys 0m2.745s 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.557 ************************************ 00:30:20.557 END TEST nvmf_host_discovery 00:30:20.557 ************************************ 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.557 
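The nvmf_host_discovery run that ends above leans heavily on the waitforcondition helper from autotest_common.sh: it evaluates a caller-supplied condition string up to ten times and returns 0 as soon as the condition holds. A minimal sketch of that pattern, reconstructed from the "local max=10" / "(( max-- ))" / eval lines visible in the trace (the retry delay is an assumption; the real helper may pace itself differently):

# Hedged sketch of the waitforcondition polling pattern seen in the trace.
waitforcondition() {
    local cond=$1   # condition passed as one string, e.g. '[[ "$(get_bdev_list)" == "" ]]'
    local max=10    # bounded number of retries, as in the trace
    while (( max-- )); do
        eval "$cond" && return 0   # succeed as soon as the condition holds
        sleep 1                    # assumed retry interval (not shown in the log)
    done
    return 1                       # condition never became true
}

The negative tests in the same run wrap rpc_cmd in a NOT helper that inverts the exit status, which is why the expected -17 "File exists" JSON-RPC errors above still let the test proceed.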
************************************ 00:30:20.557 START TEST nvmf_host_multipath_status 00:30:20.557 ************************************ 00:30:20.557 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:20.815 * Looking for test storage... 00:30:20.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:20.815 18:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # net_devs=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:22.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:22.715 
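The nvmf/common.sh trace above sorts NICs into driver families by PCI vendor/device ID before choosing test interfaces; 0x8086:0x159b is an Intel E810 variant bound to the ice driver, which is why both 0000:0a:00.x ports are picked up. A rough sketch of that selection, assuming a pci_bus_cache associative map keyed as "vendor:device" (the cache itself is built earlier in common.sh):

# Hedged reconstruction of the device-family selection in the trace.
intel=0x8086 mellanox=0x15b3
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # two E810 variants
x722=(${pci_bus_cache["$intel:0x37d2"]})
pci_devs=("${e810[@]}")   # e810 parts are preferred for this tcp run, per the trace
for pci in "${pci_devs[@]}"; do
    echo "Found $pci"     # the log prints e.g. 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
done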
18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:22.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:22.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:22.715 18:30:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:22.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:22.715 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.716 18:30:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:22.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:30:22.716 00:30:22.716 --- 10.0.0.2 ping statistics --- 00:30:22.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.716 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:30:22.716 00:30:22.716 --- 10.0.0.1 ping statistics --- 00:30:22.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.716 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1586705 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1586705 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1586705 ']' 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:22.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:22.716 18:30:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:22.716 [2024-07-26 18:30:48.851686] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:22.716 [2024-07-26 18:30:48.851773] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.973 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.973 [2024-07-26 18:30:48.888745] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:22.973 [2024-07-26 18:30:48.915374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:22.973 [2024-07-26 18:30:48.998430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.973 [2024-07-26 18:30:48.998484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.973 [2024-07-26 18:30:48.998508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.973 [2024-07-26 18:30:48.998519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.973 [2024-07-26 18:30:48.998529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.973 [2024-07-26 18:30:48.998605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.973 [2024-07-26 18:30:48.998610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.973 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.973 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:22.973 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.973 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:22.973 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:23.232 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.232 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1586705 00:30:23.232 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:23.232 [2024-07-26 18:30:49.363791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.491 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:23.750 Malloc0 00:30:23.750 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:24.008 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.266 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.524 [2024-07-26 18:30:50.521388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.524 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:24.785 [2024-07-26 18:30:50.770008] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:24.785 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1586991 00:30:24.785 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:24.785 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1586991 /var/tmp/bdevperf.sock 00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1586991 ']' 00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:24.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
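Condensed, the target-side setup that multipath_status.sh has performed up to this point is the following rpc.py sequence ($rpc stands in for the full workspace path to scripts/rpc.py; every flag is as it appears in the trace above):

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) then registers both listeners as paths to a single controller: the second bdev_nvme_attach_controller call below passes -x multipath, so port 4421 joins Nvme0 as an additional path rather than creating a new bdev.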
00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:24.786 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:25.044 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.044 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:25.044 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:25.302 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:25.870 Nvme0n1 00:30:25.870 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:26.439 Nvme0n1 00:30:26.439 18:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:26.439 18:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:28.966 18:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:28.967 18:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:28.967 18:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:28.967 18:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:29.935 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:29.935 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:29.935 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.935 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:30.193 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.193 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:30.193 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.193 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:30.451 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:30.451 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.451 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.451 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:30.710 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.710 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:30.710 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.710 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:30.968 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.968 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:30.968 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.968 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.226 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.226 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.226 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.226 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.484 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.484 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:31.484 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:31.742 18:30:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:32.000 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:32.935 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:32.935 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:32.935 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.935 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.193 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.193 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:33.193 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.193 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.452 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.452 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.452 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.452 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:33.710 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.710 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:33.710 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.710 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:33.967 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.967 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:33.967 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.968 18:31:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.225 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.225 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.225 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.225 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.483 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.483 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:34.483 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:34.741 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:35.001 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:35.939 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:35.939 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:35.939 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.939 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.197 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.197 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:36.197 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.197 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.455 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.455 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.455 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.455 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:36.712 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.712 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:36.713 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.713 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:36.971 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.971 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:36.971 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.971 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.230 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.230 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:37.230 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.230 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:37.487 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.487 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:37.487 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:37.744 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:38.002 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.374 18:31:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.374 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.631 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.631 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.631 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.631 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:39.888 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.888 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:39.888 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.888 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:40.145 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.145 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:40.145 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.145 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.403 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.403 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:40.403 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.403 18:31:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.662 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:40.662 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:40.662 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:40.919 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:41.178 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:42.116 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:42.116 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:42.116 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.116 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.374 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.374 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:42.374 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.374 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:42.632 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.632 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:42.632 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.632 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:42.890 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.890 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:42.890 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.890 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.147 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.147 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:43.147 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.147 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:43.404 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.404 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:43.404 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.404 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:43.662 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.662 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:43.662 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:43.919 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:44.182 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:45.141 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:45.141 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:45.141 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.141 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:45.399 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:45.399 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:45.399 18:31:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.399 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:45.656 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.656 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:45.656 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.656 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:45.914 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.914 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:45.914 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.914 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:46.171 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.171 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:46.171 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.171 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:46.429 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:46.429 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:46.429 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.429 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:46.687 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.687 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:46.944 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:46.944 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:47.212 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:47.472 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:48.404 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:48.404 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:48.404 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.404 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:48.662 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.662 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:48.662 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.662 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:48.920 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.920 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:48.920 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.920 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:49.178 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.178 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:49.178 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.178 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:49.436 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.436 18:31:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:49.436 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.436 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:49.694 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.694 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:49.694 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.694 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:49.952 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.952 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:49.952 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:50.210 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:50.468 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:51.403 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:51.403 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:51.403 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.403 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:51.661 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:51.661 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:51.661 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.661 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:51.919 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.919 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:51.919 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.919 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:52.177 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.177 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:52.177 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.177 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:52.435 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.435 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:52.435 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.435 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:52.693 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.693 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:52.693 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.693 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:52.951 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.952 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:52.952 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:53.210 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:53.468 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
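The check that repeats throughout this stretch of the trace is the pair of helpers tagged multipath_status.sh@59-60 and @64: set_ANA_state flips the ANA state of each listener, and port_status pulls bdev_nvme_get_io_paths from the bdevperf RPC socket and filters one field per port with jq. A condensed reconstruction follows; the commands and the jq filter are copied from the trace, while the function bodies are inferred from the xtrace output rather than quoted from the script.

    RPC=./scripts/rpc.py                  # shortened from the absolute path in the log
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {                     # $1 -> listener 4420, $2 -> listener 4421
        $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {                       # $1 = port, $2 = current|connected|accessible, $3 = expected
        local got
        got=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    # The optimized/optimized case checked at @90-92 above:
    set_ANA_state optimized optimized
    sleep 1
    port_status 4420 current true         # under the default active_passive policy only one path is current
    port_status 4421 current false
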
00:30:54.402 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:54.402 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:54.402 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.402 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:54.660 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.660 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:54.660 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.660 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:54.917 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.917 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:54.917 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.917 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:55.175 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.175 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:55.175 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.175 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:55.432 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.432 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:55.432 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.432 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:55.690 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.690 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:55.690 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.690 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:55.947 18:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.947 18:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:55.947 18:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:56.205 18:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:56.463 18:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.834 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:58.092 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.092 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:58.092 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.092 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:58.350 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:58.350 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:58.350 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.350 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:58.609 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.609 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:58.609 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.609 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:58.896 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.896 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:58.897 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.897 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1586991 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1586991 ']' 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1586991 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586991 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586991' 00:30:59.185 killing process with pid 1586991 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1586991 00:30:59.185 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1586991 00:30:59.185 Connection closed with partial response: 00:30:59.185 00:30:59.185 00:30:59.447 
18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1586991 00:30:59.447 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:59.447 [2024-07-26 18:30:50.835083] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:59.447 [2024-07-26 18:30:50.835176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586991 ] 00:30:59.447 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.447 [2024-07-26 18:30:50.866517] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:59.447 [2024-07-26 18:30:50.894798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.447 [2024-07-26 18:30:50.981419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.447 Running I/O for 90 seconds... 00:30:59.447 [2024-07-26 18:31:06.840675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.447 [2024-07-26 18:31:06.840746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:59.447 [2024-07-26 18:31:06.840805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.447 [2024-07-26 18:31:06.840841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:59.447 [2024-07-26 18:31:06.840868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.447 [2024-07-26 18:31:06.840886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:59.447 [2024-07-26 18:31:06.840909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.447 [2024-07-26 18:31:06.840926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:59.447 [2024-07-26 18:31:06.840949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.447 [2024-07-26 18:31:06.840966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:59.447 [2024-07-26 18:31:06.840989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.447 [2024-07-26 18:31:06.841006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:59.447 [2024-07-26 18:31:06.841031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
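The bdevperf log being dumped here was produced by the host-side setup at multipath_status.sh@44-56 (plus the policy switch at @116) earlier in the trace; a condensed sketch follows. Commands and flags are copied from the trace, paths are shortened, the trailing '&'s stand in for how the harness backgrounds the processes, and the flag comments are interpretive. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions filling this dump are NVMe path-related status (SCT 03h / SC 02h): I/O answered while a path's ANA state is inaccessible, which is exactly what the ANA-state permutations above provoke.

    SOCK=/var/tmp/bdevperf.sock
    RPC=./scripts/rpc.py                  # shortened from the absolute path in the log
    NQN=nqn.2016-06.io.spdk:cnode1

    # bdevperf in server mode (-z): wait on $SOCK for RPC-driven configuration
    ./build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 90 &

    $RPC -s $SOCK bdev_nvme_set_options -r -1       # -1: keep retrying failed I/O indefinitely
    # First attach creates bdev Nvme0n1 via port 4420 ...
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n $NQN -l -1 -o 10
    # ... second attach with -x multipath adds port 4421 as a second path to it
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n $NQN -x multipath -l -1 -o 10

    # Drive I/O for the duration of the ANA-state permutations
    ./examples/bdev/bdevperf/bdevperf.py -t 120 -s $SOCK perform_tests &

    # Later (@116): let both optimized paths carry I/O simultaneously
    $RPC -s $SOCK bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
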
00:30:59.451 [repeated nvme_qpair.c command/completion *NOTICE* records elided: the 18:31:22 burst issues sequential WRITE (lba 54216-54656) and READ (lba 53608-54448) commands on sqid:1, again with every command completing as ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
sqid:1 cid:97 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.453 [2024-07-26 18:31:22.523044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:59.453 Received shutdown signal, test time was about 32.460644 seconds 00:30:59.453 00:30:59.453 Latency(us) 00:30:59.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.453 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:59.453 Verification LBA range: start 0x0 length 0x4000 00:30:59.453 Nvme0n1 : 32.46 7963.01 31.11 0.00 0.00 16047.99 344.37 4026531.84 00:30:59.453 =================================================================================================================== 00:30:59.453 Total : 7963.01 31.11 0.00 0.00 16047.99 344.37 4026531.84 00:30:59.453 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:59.711 rmmod nvme_tcp 00:30:59.711 rmmod nvme_fabrics 00:30:59.711 rmmod nvme_keyring 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1586705 ']' 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1586705 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1586705 ']' 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1586705 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586705 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586705' 00:30:59.711 killing process with pid 1586705 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1586705 00:30:59.711 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1586705 00:30:59.968 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:59.968 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:59.969 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:59.969 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:59.969 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:59.969 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.969 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.969 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:01.876 00:31:01.876 real 0m41.287s 00:31:01.876 user 2m4.743s 00:31:01.876 sys 0m10.537s 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:01.876 ************************************ 00:31:01.876 END TEST nvmf_host_multipath_status 00:31:01.876 ************************************ 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:01.876 18:31:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.134 ************************************ 00:31:02.134 START TEST nvmf_discovery_remove_ifc 00:31:02.134 ************************************ 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:02.134 * Looking for test storage... 
00:31:02.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.134 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[further repetitions of the same three toolchain directories collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[toolchain repetitions and system directories collapsed, as above]
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[toolchain repetitions and system directories collapsed, as above]
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[toolchain repetitions and system directories collapsed, as above]
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
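The PATH values above balloon because every source of paths/export.sh prepends the same three toolchain directories again. As a minimal sketch of a duplicate-proof alternative (the prepend_path_once helper below is hypothetical and not part of the SPDK scripts):

# prepend_path_once DIR: prepend DIR to PATH only if it is not already there.
prepend_path_once() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present: leave PATH unchanged
        *) PATH="$1:$PATH" ;;     # otherwise prepend exactly once
    esac
}
prepend_path_once /opt/golangci/1.54.2/bin
prepend_path_once /opt/protoc/21.7/bin
prepend_path_once /opt/go/1.21.1/bin
export PATH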
00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:02.135 18:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:04.040 18:31:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:04.040 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:04.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:04.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:04.041 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.041 
18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:04.041 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:04.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:04.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:31:04.041 00:31:04.041 --- 10.0.0.2 ping statistics --- 00:31:04.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.041 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:31:04.041 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:31:04.338 00:31:04.338 --- 10.0.0.1 ping statistics --- 00:31:04.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.338 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1593181 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1593181 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1593181 ']' 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
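Condensed from the nvmf_tcp_init trace above: the TCP test topology moves one port of the NIC pair (cvl_0_0) into a private network namespace for the target and leaves its peer (cvl_0_1) in the root namespace for the initiator. A sketch of the same plumbing, to be run as root (device names and addresses are taken from the log):

# Build the two-namespace NVMe/TCP topology used by the phy autotest.
ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # target reachable from the initiator
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse direction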
00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.338 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.338 [2024-07-26 18:31:30.264004] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:31:04.338 [2024-07-26 18:31:30.264086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.338 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.338 [2024-07-26 18:31:30.300215] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:04.338 [2024-07-26 18:31:30.332130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.338 [2024-07-26 18:31:30.422380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.338 [2024-07-26 18:31:30.422438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.338 [2024-07-26 18:31:30.422464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.338 [2024-07-26 18:31:30.422477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.338 [2024-07-26 18:31:30.422490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.338 [2024-07-26 18:31:30.422525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.598 [2024-07-26 18:31:30.566498] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.598 [2024-07-26 18:31:30.574705] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:04.598 null0 00:31:04.598 [2024-07-26 18:31:30.606635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1593201 00:31:04.598 18:31:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1593201 /tmp/host.sock 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1593201 ']' 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:04.598 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.598 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.598 [2024-07-26 18:31:30.667727] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:31:04.598 [2024-07-26 18:31:30.667817] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593201 ] 00:31:04.598 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.598 [2024-07-26 18:31:30.700260] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
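The host-side app is started with --wait-for-rpc, so bdev_nvme options can be set before the framework initializes. A condensed sketch of the sequence traced above and below (discovery_remove_ifc.sh@58..@69), with paths relative to the spdk checkout; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py:

# Start the host app paused, tune bdev_nvme, then attach via discovery on port 8009.
build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!                                         # 1593201 in this run

scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
scripts/rpc.py -s /tmp/host.sock framework_start_init
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach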
00:31:04.598 [2024-07-26 18:31:30.729111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.857 [2024-07-26 18:31:30.819106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.857 18:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:06.235 [2024-07-26 18:31:32.055230] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:06.235 [2024-07-26 18:31:32.055255] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:06.235 [2024-07-26 18:31:32.055282] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.235 [2024-07-26 18:31:32.142601] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:06.235 [2024-07-26 18:31:32.206406] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:06.235 [2024-07-26 18:31:32.206474] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:06.235 [2024-07-26 18:31:32.206518] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:06.235 [2024-07-26 18:31:32.206543] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:06.235 [2024-07-26 18:31:32.206572] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:06.235 18:31:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:06.235 [2024-07-26 18:31:32.212962] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x76a370 was disconnected and freed. delete nvme_qpair. 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:06.235 18:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 
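The repeated bdev_get_bdevs / jq / sort / xargs pipeline in this trace is the test's bdev polling helper, reconstructed roughly below (the canonical definitions live in discovery_remove_ifc.sh and may differ in details such as retry limits; rpc_cmd is stubbed here for self-containment):

# Thin stand-in for the autotest rpc_cmd wrapper.
rpc_cmd() { scripts/rpc.py "$@"; }

# get_bdev_list: print all bdev names as one sorted, space-separated line.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# wait_for_bdev EXPECTED: poll once per second until the bdev list matches
# EXPECTED (an empty EXPECTED waits for the bdev to disappear).
wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1    # e.g. block until discovery has attached nvme0n1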
00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:07.608 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:08.546 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:09.482 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.417 18:31:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:10.417 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:11.795 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:11.795 [2024-07-26 18:31:37.647279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:11.795 [2024-07-26 18:31:37.647344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.795 [2024-07-26 18:31:37.647383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.795 [2024-07-26 18:31:37.647403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.795 [2024-07-26 18:31:37.647420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.795 [2024-07-26 18:31:37.647437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.795 [2024-07-26 18:31:37.647453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.795 [2024-07-26 18:31:37.647471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:11.795 [2024-07-26 18:31:37.647488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.795 [2024-07-26 18:31:37.647506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.795 [2024-07-26 18:31:37.647522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.795 [2024-07-26 18:31:37.647539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x730d70 is same with the state(5) to be set 00:31:11.795 [2024-07-26 18:31:37.657298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x730d70 (9): Bad file descriptor 00:31:11.795 [2024-07-26 18:31:37.667356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.734 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:12.734 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.734 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:12.734 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.734 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:12.734 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.735 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:12.735 [2024-07-26 18:31:38.701105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:12.735 [2024-07-26 18:31:38.701181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x730d70 with addr=10.0.0.2, port=4420 00:31:12.735 [2024-07-26 18:31:38.701208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x730d70 is same with the state(5) to be set 00:31:12.735 [2024-07-26 18:31:38.701261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x730d70 (9): Bad file descriptor 00:31:12.735 [2024-07-26 18:31:38.701733] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:12.735 [2024-07-26 18:31:38.701776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:12.735 [2024-07-26 18:31:38.701793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:12.735 [2024-07-26 18:31:38.701808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:12.735 [2024-07-26 18:31:38.701837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
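With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 (set at attach time above), the host retries roughly once per second and gives up on the controller after about two seconds without a usable path, which is what produces the repeated "resetting controller" / "Resetting controller failed." pairs in this stretch of the log. While that is happening, the controller state can be inspected out of band; a spot-check, not part of this test's trace:

# List bdev_nvme controllers and their state while the path is down.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers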
00:31:12.735 [2024-07-26 18:31:38.701854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:12.735 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.735 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:12.735 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:13.672 [2024-07-26 18:31:39.704376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:13.672 [2024-07-26 18:31:39.704435] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:13.672 [2024-07-26 18:31:39.704453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:13.672 [2024-07-26 18:31:39.704470] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:13.672 [2024-07-26 18:31:39.704504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.672 [2024-07-26 18:31:39.704549] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:13.672 [2024-07-26 18:31:39.704599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.672 [2024-07-26 18:31:39.704624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.672 [2024-07-26 18:31:39.704648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.672 [2024-07-26 18:31:39.704666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.672 [2024-07-26 18:31:39.704682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.672 [2024-07-26 18:31:39.704698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.672 [2024-07-26 18:31:39.704715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.672 [2024-07-26 18:31:39.704733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.672 [2024-07-26 18:31:39.704752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.672 [2024-07-26 18:31:39.704768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.672 [2024-07-26 18:31:39.704796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:13.672 [2024-07-26 18:31:39.704899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x730210 (9): Bad file descriptor 00:31:13.672 [2024-07-26 18:31:39.705891] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:13.672 [2024-07-26 18:31:39.705915] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.672 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.673 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.932 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.932 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:13.932 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.865 18:31:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:14.865 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.799 [2024-07-26 18:31:41.761087] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:15.799 [2024-07-26 18:31:41.761135] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:15.799 [2024-07-26 18:31:41.761158] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:15.799 [2024-07-26 18:31:41.888584] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.799 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.058 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:16.058 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:16.058 [2024-07-26 18:31:41.951602] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:16.058 [2024-07-26 18:31:41.951658] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:16.058 [2024-07-26 18:31:41.951698] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:16.058 [2024-07-26 18:31:41.951723] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:16.058 [2024-07-26 18:31:41.951738] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:16.058 [2024-07-26 18:31:41.958867] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x773900 was disconnected and freed. 
delete nvme_qpair. 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1593201 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1593201 ']' 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1593201 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:16.991 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1593201 00:31:16.991 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:16.991 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:16.991 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1593201' 00:31:16.991 killing process with pid 1593201 00:31:16.991 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1593201 00:31:16.991 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1593201 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:17.250 rmmod nvme_tcp 00:31:17.250 rmmod nvme_fabrics 00:31:17.250 rmmod nvme_keyring 
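
Note on the wait loops traced above: the test drives everything through one helper pair. get_bdev_list snapshots the bdev names over the host application's RPC socket, and wait_for_bdev polls it once per second until the expected name appears. A minimal sketch reconstructed from the xtrace output (helper names match the script; rpc_cmd is the suite's RPC wrapper visible in the trace, and the loop below simplifies the script's bracket-test form, so treat it as illustrative rather than verbatim):

    # Snapshot the current bdev names via the host app's RPC socket.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expected value,
    # e.g. wait_for_bdev nvme1n1 after the interface is re-added.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
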
00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1593181 ']' 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1593181 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1593181 ']' 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1593181 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1593181 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1593181' 00:31:17.250 killing process with pid 1593181 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1593181 00:31:17.250 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1593181 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.509 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.051 00:31:20.051 real 0m17.561s 00:31:20.051 user 0m25.403s 00:31:20.051 sys 0m2.996s 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.051 ************************************ 00:31:20.051 END TEST nvmf_discovery_remove_ifc 00:31:20.051 ************************************ 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:20.051 18:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.051 ************************************ 00:31:20.051 START TEST nvmf_identify_kernel_target 00:31:20.052 ************************************ 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:20.052 * Looking for test storage... 00:31:20.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.052 18:31:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.052 18:31:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:21.430 
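
The records above build per-vendor PCI ID tables (e810, x722, mlx) keyed on vendor:device pairs; the scan in the records that follow walks pci_devs and resolves each matching function to its kernel network interface through sysfs, which is how the cvl_0_0 and cvl_0_1 names below are found. A condensed sketch of that resolution step, assuming the simple case with no link-state or RDMA filtering (the traced helper checks both):

    # Map each supported PCI function to its net interface name(s) via sysfs,
    # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0 -> cvl_0_0.
    for pci in "${pci_devs[@]}"; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
        done
    done
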
18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:21.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:21.430 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:21.430 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.430 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:21.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:21.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:31:21.690 00:31:21.690 --- 10.0.0.2 ping statistics --- 00:31:21.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.690 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:31:21.690 00:31:21.690 --- 10.0.0.1 ping statistics --- 00:31:21.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.690 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:21.690 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:21.691 18:31:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:23.066 Waiting for block devices as requested 00:31:23.066 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:23.066 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:23.066 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:23.324 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:23.324 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:23.324 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:23.324 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:23.324 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:23.584 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:23.584 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:23.584 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:23.584 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:23.845 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:23.845 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:23.845 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:24.105 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:24.105 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:31:24.105 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:24.365 No valid GPT data, bailing 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:24.365 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:24.365 00:31:24.365 Discovery Log Number of Records 2, Generation counter 2 00:31:24.365 =====Discovery Log Entry 0====== 00:31:24.365 trtype: tcp 00:31:24.365 adrfam: ipv4 00:31:24.365 subtype: current discovery subsystem 00:31:24.365 treq: not specified, sq flow control disable supported 00:31:24.365 portid: 1 00:31:24.365 trsvcid: 4420 00:31:24.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:24.365 traddr: 10.0.0.1 00:31:24.365 eflags: none 00:31:24.365 sectype: none 00:31:24.365 =====Discovery Log Entry 1====== 00:31:24.365 trtype: tcp 00:31:24.365 adrfam: ipv4 00:31:24.365 subtype: nvme subsystem 00:31:24.365 treq: not specified, sq flow control disable supported 00:31:24.365 portid: 1 00:31:24.365 trsvcid: 4420 00:31:24.365 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:24.365 traddr: 10.0.0.1 00:31:24.365 eflags: none 00:31:24.365 sectype: none 00:31:24.365 18:31:50 
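
configure_kernel_target, traced above, drives the in-kernel nvmet target entirely through configfs: create the subsystem and namespace directories, point the namespace at the local block device, then describe a TCP listener and link the subsystem into it. Since xtrace does not print redirections, the attribute files below are the standard nvmet configfs names matched to the echoed values (attr_model is corroborated by the Model Number in the identify dump further down), so treat this as a reconstruction rather than a verbatim excerpt:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The nvme discover call that follows confirms the result: a two-record discovery log, one entry for the discovery subsystem itself and one for nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420.
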
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:24.365 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:24.365 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.365 ===================================================== 00:31:24.365 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:24.365 ===================================================== 00:31:24.365 Controller Capabilities/Features 00:31:24.365 ================================ 00:31:24.365 Vendor ID: 0000 00:31:24.365 Subsystem Vendor ID: 0000 00:31:24.365 Serial Number: 362e29a1ccce9a318380 00:31:24.365 Model Number: Linux 00:31:24.366 Firmware Version: 6.7.0-68 00:31:24.366 Recommended Arb Burst: 0 00:31:24.366 IEEE OUI Identifier: 00 00 00 00:31:24.366 Multi-path I/O 00:31:24.366 May have multiple subsystem ports: No 00:31:24.366 May have multiple controllers: No 00:31:24.366 Associated with SR-IOV VF: No 00:31:24.366 Max Data Transfer Size: Unlimited 00:31:24.366 Max Number of Namespaces: 0 00:31:24.366 Max Number of I/O Queues: 1024 00:31:24.366 NVMe Specification Version (VS): 1.3 00:31:24.366 NVMe Specification Version (Identify): 1.3 00:31:24.366 Maximum Queue Entries: 1024 00:31:24.366 Contiguous Queues Required: No 00:31:24.366 Arbitration Mechanisms Supported 00:31:24.366 Weighted Round Robin: Not Supported 00:31:24.366 Vendor Specific: Not Supported 00:31:24.366 Reset Timeout: 7500 ms 00:31:24.366 Doorbell Stride: 4 bytes 00:31:24.366 NVM Subsystem Reset: Not Supported 00:31:24.366 Command Sets Supported 00:31:24.366 NVM Command Set: Supported 00:31:24.366 Boot Partition: Not Supported 00:31:24.366 Memory Page Size Minimum: 4096 bytes 00:31:24.366 Memory Page Size Maximum: 4096 bytes 00:31:24.366 Persistent Memory Region: Not Supported 00:31:24.366 Optional Asynchronous Events Supported 00:31:24.366 Namespace Attribute Notices: Not Supported 00:31:24.366 Firmware Activation Notices: Not Supported 00:31:24.366 ANA Change Notices: Not Supported 00:31:24.366 PLE Aggregate Log Change Notices: Not Supported 00:31:24.366 LBA Status Info Alert Notices: Not Supported 00:31:24.366 EGE Aggregate Log Change Notices: Not Supported 00:31:24.366 Normal NVM Subsystem Shutdown event: Not Supported 00:31:24.366 Zone Descriptor Change Notices: Not Supported 00:31:24.366 Discovery Log Change Notices: Supported 00:31:24.366 Controller Attributes 00:31:24.366 128-bit Host Identifier: Not Supported 00:31:24.366 Non-Operational Permissive Mode: Not Supported 00:31:24.366 NVM Sets: Not Supported 00:31:24.366 Read Recovery Levels: Not Supported 00:31:24.366 Endurance Groups: Not Supported 00:31:24.366 Predictable Latency Mode: Not Supported 00:31:24.366 Traffic Based Keep ALive: Not Supported 00:31:24.366 Namespace Granularity: Not Supported 00:31:24.366 SQ Associations: Not Supported 00:31:24.366 UUID List: Not Supported 00:31:24.366 Multi-Domain Subsystem: Not Supported 00:31:24.366 Fixed Capacity Management: Not Supported 00:31:24.366 Variable Capacity Management: Not Supported 00:31:24.366 Delete Endurance Group: Not Supported 00:31:24.366 Delete NVM Set: Not Supported 00:31:24.366 Extended LBA Formats Supported: Not Supported 00:31:24.366 Flexible Data Placement Supported: Not Supported 00:31:24.366 00:31:24.366 Controller Memory Buffer Support 00:31:24.366 ================================ 00:31:24.366 Supported: No 
00:31:24.366 00:31:24.366 Persistent Memory Region Support 00:31:24.366 ================================ 00:31:24.366 Supported: No 00:31:24.366 00:31:24.366 Admin Command Set Attributes 00:31:24.366 ============================ 00:31:24.366 Security Send/Receive: Not Supported 00:31:24.366 Format NVM: Not Supported 00:31:24.366 Firmware Activate/Download: Not Supported 00:31:24.366 Namespace Management: Not Supported 00:31:24.366 Device Self-Test: Not Supported 00:31:24.366 Directives: Not Supported 00:31:24.366 NVMe-MI: Not Supported 00:31:24.366 Virtualization Management: Not Supported 00:31:24.366 Doorbell Buffer Config: Not Supported 00:31:24.366 Get LBA Status Capability: Not Supported 00:31:24.366 Command & Feature Lockdown Capability: Not Supported 00:31:24.366 Abort Command Limit: 1 00:31:24.366 Async Event Request Limit: 1 00:31:24.366 Number of Firmware Slots: N/A 00:31:24.366 Firmware Slot 1 Read-Only: N/A 00:31:24.366 Firmware Activation Without Reset: N/A 00:31:24.366 Multiple Update Detection Support: N/A 00:31:24.366 Firmware Update Granularity: No Information Provided 00:31:24.366 Per-Namespace SMART Log: No 00:31:24.366 Asymmetric Namespace Access Log Page: Not Supported 00:31:24.366 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:24.366 Command Effects Log Page: Not Supported 00:31:24.366 Get Log Page Extended Data: Supported 00:31:24.366 Telemetry Log Pages: Not Supported 00:31:24.366 Persistent Event Log Pages: Not Supported 00:31:24.366 Supported Log Pages Log Page: May Support 00:31:24.366 Commands Supported & Effects Log Page: Not Supported 00:31:24.366 Feature Identifiers & Effects Log Page:May Support 00:31:24.366 NVMe-MI Commands & Effects Log Page: May Support 00:31:24.366 Data Area 4 for Telemetry Log: Not Supported 00:31:24.366 Error Log Page Entries Supported: 1 00:31:24.366 Keep Alive: Not Supported 00:31:24.366 00:31:24.366 NVM Command Set Attributes 00:31:24.366 ========================== 00:31:24.366 Submission Queue Entry Size 00:31:24.366 Max: 1 00:31:24.366 Min: 1 00:31:24.366 Completion Queue Entry Size 00:31:24.366 Max: 1 00:31:24.366 Min: 1 00:31:24.366 Number of Namespaces: 0 00:31:24.366 Compare Command: Not Supported 00:31:24.366 Write Uncorrectable Command: Not Supported 00:31:24.366 Dataset Management Command: Not Supported 00:31:24.366 Write Zeroes Command: Not Supported 00:31:24.366 Set Features Save Field: Not Supported 00:31:24.366 Reservations: Not Supported 00:31:24.366 Timestamp: Not Supported 00:31:24.366 Copy: Not Supported 00:31:24.366 Volatile Write Cache: Not Present 00:31:24.366 Atomic Write Unit (Normal): 1 00:31:24.366 Atomic Write Unit (PFail): 1 00:31:24.366 Atomic Compare & Write Unit: 1 00:31:24.366 Fused Compare & Write: Not Supported 00:31:24.366 Scatter-Gather List 00:31:24.366 SGL Command Set: Supported 00:31:24.366 SGL Keyed: Not Supported 00:31:24.366 SGL Bit Bucket Descriptor: Not Supported 00:31:24.366 SGL Metadata Pointer: Not Supported 00:31:24.366 Oversized SGL: Not Supported 00:31:24.366 SGL Metadata Address: Not Supported 00:31:24.366 SGL Offset: Supported 00:31:24.366 Transport SGL Data Block: Not Supported 00:31:24.366 Replay Protected Memory Block: Not Supported 00:31:24.366 00:31:24.366 Firmware Slot Information 00:31:24.366 ========================= 00:31:24.366 Active slot: 0 00:31:24.366 00:31:24.366 00:31:24.366 Error Log 00:31:24.366 ========= 00:31:24.366 00:31:24.366 Active Namespaces 00:31:24.366 ================= 00:31:24.366 Discovery Log Page 00:31:24.366 ================== 00:31:24.366 
Generation Counter: 2 00:31:24.366 Number of Records: 2 00:31:24.366 Record Format: 0 00:31:24.366 00:31:24.366 Discovery Log Entry 0 00:31:24.366 ---------------------- 00:31:24.366 Transport Type: 3 (TCP) 00:31:24.366 Address Family: 1 (IPv4) 00:31:24.366 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:24.366 Entry Flags: 00:31:24.366 Duplicate Returned Information: 0 00:31:24.366 Explicit Persistent Connection Support for Discovery: 0 00:31:24.366 Transport Requirements: 00:31:24.366 Secure Channel: Not Specified 00:31:24.366 Port ID: 1 (0x0001) 00:31:24.366 Controller ID: 65535 (0xffff) 00:31:24.366 Admin Max SQ Size: 32 00:31:24.366 Transport Service Identifier: 4420 00:31:24.366 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:24.366 Transport Address: 10.0.0.1 00:31:24.366 Discovery Log Entry 1 00:31:24.366 ---------------------- 00:31:24.366 Transport Type: 3 (TCP) 00:31:24.366 Address Family: 1 (IPv4) 00:31:24.366 Subsystem Type: 2 (NVM Subsystem) 00:31:24.366 Entry Flags: 00:31:24.366 Duplicate Returned Information: 0 00:31:24.366 Explicit Persistent Connection Support for Discovery: 0 00:31:24.366 Transport Requirements: 00:31:24.366 Secure Channel: Not Specified 00:31:24.366 Port ID: 1 (0x0001) 00:31:24.366 Controller ID: 65535 (0xffff) 00:31:24.366 Admin Max SQ Size: 32 00:31:24.366 Transport Service Identifier: 4420 00:31:24.366 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:24.366 Transport Address: 10.0.0.1 00:31:24.366 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.626 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.626 get_feature(0x01) failed 00:31:24.626 get_feature(0x02) failed 00:31:24.626 get_feature(0x04) failed 00:31:24.626 ===================================================== 00:31:24.626 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:24.626 ===================================================== 00:31:24.626 Controller Capabilities/Features 00:31:24.626 ================================ 00:31:24.626 Vendor ID: 0000 00:31:24.626 Subsystem Vendor ID: 0000 00:31:24.626 Serial Number: 16717e0504d86590db2f 00:31:24.626 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:24.626 Firmware Version: 6.7.0-68 00:31:24.626 Recommended Arb Burst: 6 00:31:24.626 IEEE OUI Identifier: 00 00 00 00:31:24.626 Multi-path I/O 00:31:24.626 May have multiple subsystem ports: Yes 00:31:24.626 May have multiple controllers: Yes 00:31:24.626 Associated with SR-IOV VF: No 00:31:24.626 Max Data Transfer Size: Unlimited 00:31:24.626 Max Number of Namespaces: 1024 00:31:24.626 Max Number of I/O Queues: 128 00:31:24.626 NVMe Specification Version (VS): 1.3 00:31:24.626 NVMe Specification Version (Identify): 1.3 00:31:24.626 Maximum Queue Entries: 1024 00:31:24.626 Contiguous Queues Required: No 00:31:24.626 Arbitration Mechanisms Supported 00:31:24.626 Weighted Round Robin: Not Supported 00:31:24.626 Vendor Specific: Not Supported 00:31:24.626 Reset Timeout: 7500 ms 00:31:24.626 Doorbell Stride: 4 bytes 00:31:24.626 NVM Subsystem Reset: Not Supported 00:31:24.626 Command Sets Supported 00:31:24.626 NVM Command Set: Supported 00:31:24.626 Boot Partition: Not Supported 00:31:24.626 Memory Page Size Minimum: 4096 bytes 00:31:24.626 Memory Page Size Maximum: 4096 bytes 00:31:24.626 
Persistent Memory Region: Not Supported 00:31:24.626 Optional Asynchronous Events Supported 00:31:24.626 Namespace Attribute Notices: Supported 00:31:24.626 Firmware Activation Notices: Not Supported 00:31:24.626 ANA Change Notices: Supported 00:31:24.626 PLE Aggregate Log Change Notices: Not Supported 00:31:24.626 LBA Status Info Alert Notices: Not Supported 00:31:24.626 EGE Aggregate Log Change Notices: Not Supported 00:31:24.626 Normal NVM Subsystem Shutdown event: Not Supported 00:31:24.626 Zone Descriptor Change Notices: Not Supported 00:31:24.626 Discovery Log Change Notices: Not Supported 00:31:24.626 Controller Attributes 00:31:24.626 128-bit Host Identifier: Supported 00:31:24.626 Non-Operational Permissive Mode: Not Supported 00:31:24.626 NVM Sets: Not Supported 00:31:24.626 Read Recovery Levels: Not Supported 00:31:24.626 Endurance Groups: Not Supported 00:31:24.626 Predictable Latency Mode: Not Supported 00:31:24.626 Traffic Based Keep ALive: Supported 00:31:24.626 Namespace Granularity: Not Supported 00:31:24.626 SQ Associations: Not Supported 00:31:24.626 UUID List: Not Supported 00:31:24.626 Multi-Domain Subsystem: Not Supported 00:31:24.626 Fixed Capacity Management: Not Supported 00:31:24.626 Variable Capacity Management: Not Supported 00:31:24.626 Delete Endurance Group: Not Supported 00:31:24.626 Delete NVM Set: Not Supported 00:31:24.626 Extended LBA Formats Supported: Not Supported 00:31:24.626 Flexible Data Placement Supported: Not Supported 00:31:24.626 00:31:24.626 Controller Memory Buffer Support 00:31:24.626 ================================ 00:31:24.626 Supported: No 00:31:24.626 00:31:24.626 Persistent Memory Region Support 00:31:24.626 ================================ 00:31:24.626 Supported: No 00:31:24.626 00:31:24.626 Admin Command Set Attributes 00:31:24.626 ============================ 00:31:24.626 Security Send/Receive: Not Supported 00:31:24.626 Format NVM: Not Supported 00:31:24.626 Firmware Activate/Download: Not Supported 00:31:24.626 Namespace Management: Not Supported 00:31:24.626 Device Self-Test: Not Supported 00:31:24.626 Directives: Not Supported 00:31:24.626 NVMe-MI: Not Supported 00:31:24.626 Virtualization Management: Not Supported 00:31:24.626 Doorbell Buffer Config: Not Supported 00:31:24.626 Get LBA Status Capability: Not Supported 00:31:24.626 Command & Feature Lockdown Capability: Not Supported 00:31:24.626 Abort Command Limit: 4 00:31:24.626 Async Event Request Limit: 4 00:31:24.626 Number of Firmware Slots: N/A 00:31:24.626 Firmware Slot 1 Read-Only: N/A 00:31:24.626 Firmware Activation Without Reset: N/A 00:31:24.626 Multiple Update Detection Support: N/A 00:31:24.626 Firmware Update Granularity: No Information Provided 00:31:24.626 Per-Namespace SMART Log: Yes 00:31:24.626 Asymmetric Namespace Access Log Page: Supported 00:31:24.626 ANA Transition Time : 10 sec 00:31:24.626 00:31:24.626 Asymmetric Namespace Access Capabilities 00:31:24.626 ANA Optimized State : Supported 00:31:24.626 ANA Non-Optimized State : Supported 00:31:24.626 ANA Inaccessible State : Supported 00:31:24.626 ANA Persistent Loss State : Supported 00:31:24.626 ANA Change State : Supported 00:31:24.626 ANAGRPID is not changed : No 00:31:24.626 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:24.626 00:31:24.626 ANA Group Identifier Maximum : 128 00:31:24.626 Number of ANA Group Identifiers : 128 00:31:24.626 Max Number of Allowed Namespaces : 1024 00:31:24.626 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:24.626 Command Effects Log Page: Supported 
00:31:24.627 Get Log Page Extended Data: Supported 00:31:24.627 Telemetry Log Pages: Not Supported 00:31:24.627 Persistent Event Log Pages: Not Supported 00:31:24.627 Supported Log Pages Log Page: May Support 00:31:24.627 Commands Supported & Effects Log Page: Not Supported 00:31:24.627 Feature Identifiers & Effects Log Page:May Support 00:31:24.627 NVMe-MI Commands & Effects Log Page: May Support 00:31:24.627 Data Area 4 for Telemetry Log: Not Supported 00:31:24.627 Error Log Page Entries Supported: 128 00:31:24.627 Keep Alive: Supported 00:31:24.627 Keep Alive Granularity: 1000 ms 00:31:24.627 00:31:24.627 NVM Command Set Attributes 00:31:24.627 ========================== 00:31:24.627 Submission Queue Entry Size 00:31:24.627 Max: 64 00:31:24.627 Min: 64 00:31:24.627 Completion Queue Entry Size 00:31:24.627 Max: 16 00:31:24.627 Min: 16 00:31:24.627 Number of Namespaces: 1024 00:31:24.627 Compare Command: Not Supported 00:31:24.627 Write Uncorrectable Command: Not Supported 00:31:24.627 Dataset Management Command: Supported 00:31:24.627 Write Zeroes Command: Supported 00:31:24.627 Set Features Save Field: Not Supported 00:31:24.627 Reservations: Not Supported 00:31:24.627 Timestamp: Not Supported 00:31:24.627 Copy: Not Supported 00:31:24.627 Volatile Write Cache: Present 00:31:24.627 Atomic Write Unit (Normal): 1 00:31:24.627 Atomic Write Unit (PFail): 1 00:31:24.627 Atomic Compare & Write Unit: 1 00:31:24.627 Fused Compare & Write: Not Supported 00:31:24.627 Scatter-Gather List 00:31:24.627 SGL Command Set: Supported 00:31:24.627 SGL Keyed: Not Supported 00:31:24.627 SGL Bit Bucket Descriptor: Not Supported 00:31:24.627 SGL Metadata Pointer: Not Supported 00:31:24.627 Oversized SGL: Not Supported 00:31:24.627 SGL Metadata Address: Not Supported 00:31:24.627 SGL Offset: Supported 00:31:24.627 Transport SGL Data Block: Not Supported 00:31:24.627 Replay Protected Memory Block: Not Supported 00:31:24.627 00:31:24.627 Firmware Slot Information 00:31:24.627 ========================= 00:31:24.627 Active slot: 0 00:31:24.627 00:31:24.627 Asymmetric Namespace Access 00:31:24.627 =========================== 00:31:24.627 Change Count : 0 00:31:24.627 Number of ANA Group Descriptors : 1 00:31:24.627 ANA Group Descriptor : 0 00:31:24.627 ANA Group ID : 1 00:31:24.627 Number of NSID Values : 1 00:31:24.627 Change Count : 0 00:31:24.627 ANA State : 1 00:31:24.627 Namespace Identifier : 1 00:31:24.627 00:31:24.627 Commands Supported and Effects 00:31:24.627 ============================== 00:31:24.627 Admin Commands 00:31:24.627 -------------- 00:31:24.627 Get Log Page (02h): Supported 00:31:24.627 Identify (06h): Supported 00:31:24.627 Abort (08h): Supported 00:31:24.627 Set Features (09h): Supported 00:31:24.627 Get Features (0Ah): Supported 00:31:24.627 Asynchronous Event Request (0Ch): Supported 00:31:24.627 Keep Alive (18h): Supported 00:31:24.627 I/O Commands 00:31:24.627 ------------ 00:31:24.627 Flush (00h): Supported 00:31:24.627 Write (01h): Supported LBA-Change 00:31:24.627 Read (02h): Supported 00:31:24.627 Write Zeroes (08h): Supported LBA-Change 00:31:24.627 Dataset Management (09h): Supported 00:31:24.627 00:31:24.627 Error Log 00:31:24.627 ========= 00:31:24.627 Entry: 0 00:31:24.627 Error Count: 0x3 00:31:24.627 Submission Queue Id: 0x0 00:31:24.627 Command Id: 0x5 00:31:24.627 Phase Bit: 0 00:31:24.627 Status Code: 0x2 00:31:24.627 Status Code Type: 0x0 00:31:24.627 Do Not Retry: 1 00:31:24.627 Error Location: 0x28 00:31:24.627 LBA: 0x0 00:31:24.627 Namespace: 0x0 00:31:24.627 Vendor Log 
Page: 0x0 00:31:24.627 ----------- 00:31:24.627 Entry: 1 00:31:24.627 Error Count: 0x2 00:31:24.627 Submission Queue Id: 0x0 00:31:24.627 Command Id: 0x5 00:31:24.627 Phase Bit: 0 00:31:24.627 Status Code: 0x2 00:31:24.627 Status Code Type: 0x0 00:31:24.627 Do Not Retry: 1 00:31:24.627 Error Location: 0x28 00:31:24.627 LBA: 0x0 00:31:24.627 Namespace: 0x0 00:31:24.627 Vendor Log Page: 0x0 00:31:24.627 ----------- 00:31:24.627 Entry: 2 00:31:24.627 Error Count: 0x1 00:31:24.627 Submission Queue Id: 0x0 00:31:24.627 Command Id: 0x4 00:31:24.627 Phase Bit: 0 00:31:24.627 Status Code: 0x2 00:31:24.627 Status Code Type: 0x0 00:31:24.627 Do Not Retry: 1 00:31:24.627 Error Location: 0x28 00:31:24.627 LBA: 0x0 00:31:24.627 Namespace: 0x0 00:31:24.627 Vendor Log Page: 0x0 00:31:24.627 00:31:24.627 Number of Queues 00:31:24.627 ================ 00:31:24.627 Number of I/O Submission Queues: 128 00:31:24.627 Number of I/O Completion Queues: 128 00:31:24.627 00:31:24.627 ZNS Specific Controller Data 00:31:24.627 ============================ 00:31:24.627 Zone Append Size Limit: 0 00:31:24.627 00:31:24.627 00:31:24.627 Active Namespaces 00:31:24.627 ================= 00:31:24.627 get_feature(0x05) failed 00:31:24.627 Namespace ID:1 00:31:24.627 Command Set Identifier: NVM (00h) 00:31:24.627 Deallocate: Supported 00:31:24.627 Deallocated/Unwritten Error: Not Supported 00:31:24.627 Deallocated Read Value: Unknown 00:31:24.627 Deallocate in Write Zeroes: Not Supported 00:31:24.627 Deallocated Guard Field: 0xFFFF 00:31:24.627 Flush: Supported 00:31:24.627 Reservation: Not Supported 00:31:24.627 Namespace Sharing Capabilities: Multiple Controllers 00:31:24.627 Size (in LBAs): 1953525168 (931GiB) 00:31:24.627 Capacity (in LBAs): 1953525168 (931GiB) 00:31:24.627 Utilization (in LBAs): 1953525168 (931GiB) 00:31:24.627 UUID: 3bfedac2-5a4c-4074-8aae-4a16e19e5694 00:31:24.627 Thin Provisioning: Not Supported 00:31:24.627 Per-NS Atomic Units: Yes 00:31:24.627 Atomic Boundary Size (Normal): 0 00:31:24.627 Atomic Boundary Size (PFail): 0 00:31:24.627 Atomic Boundary Offset: 0 00:31:24.627 NGUID/EUI64 Never Reused: No 00:31:24.627 ANA group ID: 1 00:31:24.627 Namespace Write Protected: No 00:31:24.627 Number of LBA Formats: 1 00:31:24.627 Current LBA Format: LBA Format #00 00:31:24.627 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:24.627 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:24.627 rmmod nvme_tcp 00:31:24.627 rmmod nvme_fabrics 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:24.627 18:31:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.627 18:31:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.537 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:26.537 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:26.537 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:26.537 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:26.796 18:31:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:27.728 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:27.728 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:27.987 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:27.987 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:27.987 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:27.987 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:27.987 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:27.987 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:27.987 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:31:27.987 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:28.925 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:28.925 00:31:28.925 real 0m9.389s 00:31:28.925 user 0m1.943s 00:31:28.925 sys 0m3.349s 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.925 ************************************ 00:31:28.925 END TEST nvmf_identify_kernel_target 00:31:28.925 ************************************ 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.925 ************************************ 00:31:28.925 START TEST nvmf_auth_host 00:31:28.925 ************************************ 00:31:28.925 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:29.184 * Looking for test storage... 00:31:29.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.184 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
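The trace above derives its host identity with `nvme gen-hostnqn`, which yields the UUID-based NQN captured in NVME_HOSTNQN. As a rough stand-alone sketch of that form (reusing /etc/nvme/hostid when present is an assumption for illustration, not the harness's own logic):

# Illustrative only: emit a 2014-08 UUID-style host NQN like the one logged.
uuid=$(cat /etc/nvme/hostid 2>/dev/null || uuidgen)
echo "nqn.2014-08.org.nvmexpress:uuid:${uuid}"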
00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:29.185 18:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:31.093 18:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:31.093 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
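The `gather_supported_nvmf_pci_devs` walk above classifies NICs purely by vendor:device ID; 0x8086:0x159b, matched just above, is an Intel E810 function. A minimal stand-alone equivalent, assuming lspci is available (the harness consults a prebuilt pci_bus_cache instead):

# List E810 (8086:159b) functions by PCI address,
# as the "Found 0000:0a:00.x" lines report.
lspci -Dnn | awk '/\[8086:159b\]/ {print $1}'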
00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:31.093 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.093 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:31.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:31.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:31.094 18:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:31.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:31:31.094 00:31:31.094 --- 10.0.0.2 ping statistics --- 00:31:31.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.094 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:31:31.094 00:31:31.094 --- 10.0.0.1 ping statistics --- 00:31:31.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.094 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1600370 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1600370 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1600370 ']' 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
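Once nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace, `waitforlisten` above simply blocks until the target's RPC socket answers. A hedged sketch of that polling loop (rpc.py and rpc_get_methods are real SPDK entry points, but the retry budget and relative path here are illustrative):

# Poll the SPDK RPC socket until the freshly started target accepts commands.
for _ in $(seq 1 100); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done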
00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.094 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.400 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:31.400 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:31.400 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:31.400 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:31.400 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=15b1d36b1ad4c2910e3169580154d988 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dHU 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 15b1d36b1ad4c2910e3169580154d988 0 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 15b1d36b1ad4c2910e3169580154d988 0 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=15b1d36b1ad4c2910e3169580154d988 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dHU 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dHU 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dHU 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.683 18:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d9de8195d02b26969e25a8673eea4ec83ac34231c97981c03d46af0ed6396f63 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.e0L 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9de8195d02b26969e25a8673eea4ec83ac34231c97981c03d46af0ed6396f63 3 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9de8195d02b26969e25a8673eea4ec83ac34231c97981c03d46af0ed6396f63 3 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9de8195d02b26969e25a8673eea4ec83ac34231c97981c03d46af0ed6396f63 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.e0L 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.e0L 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.e0L 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=08c5aa5da89154d7cd3b41c41f59c81e4c9909415b87f8c6 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9Wv 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 08c5aa5da89154d7cd3b41c41f59c81e4c9909415b87f8c6 0 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 08c5aa5da89154d7cd3b41c41f59c81e4c9909415b87f8c6 0 
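Each `gen_dhchap_key`/`format_key DHHC-1 <hex> <digest>` pair above wraps a freshly drawn secret in the DHHC-1 key format used for NVMe in-band authentication. A sketch of the general layout per the nvme-cli gen-dhchap-key convention: base64 of the secret followed by its little-endian CRC32, behind a two-digit hash identifier (digest 0 in the trace maps to "00"). Whether the harness feeds raw bytes or the hex text as the secret is not visible here, so the fromhex step below is an assumption:

# Wrap the 48-hex-char key from the trace in a DHHC-1:00:...: envelope.
key_hex=08c5aa5da89154d7cd3b41c41f59c81e4c9909415b87f8c6
python3 - "$key_hex" <<'EOF'
import base64, sys, zlib
secret = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:00:%s:" % base64.b64encode(secret + crc).decode())
EOF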
00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=08c5aa5da89154d7cd3b41c41f59c81e4c9909415b87f8c6 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9Wv 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9Wv 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9Wv 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=292f060b3b2a9aca0a0185497e42d74473d10aaf503d9260 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bB7 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 292f060b3b2a9aca0a0185497e42d74473d10aaf503d9260 2 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 292f060b3b2a9aca0a0185497e42d74473d10aaf503d9260 2 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=292f060b3b2a9aca0a0185497e42d74473d10aaf503d9260 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bB7 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bB7 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.bB7 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.683 18:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0530e3f2ad48ae94a0031c25c964302c 00:31:31.683 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.P9v 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0530e3f2ad48ae94a0031c25c964302c 1 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0530e3f2ad48ae94a0031c25c964302c 1 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0530e3f2ad48ae94a0031c25c964302c 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.P9v 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.P9v 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.P9v 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:31.684 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a7494a470e8d88b79f26247b99c9455a 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BP2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7494a470e8d88b79f26247b99c9455a 1 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7494a470e8d88b79f26247b99c9455a 1 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=a7494a470e8d88b79f26247b99c9455a 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BP2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BP2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BP2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cf2a62613b16841842a437e375d0e3d116c9ee037a47bc22 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zEM 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cf2a62613b16841842a437e375d0e3d116c9ee037a47bc22 2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cf2a62613b16841842a437e375d0e3d116c9ee037a47bc22 2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cf2a62613b16841842a437e375d0e3d116c9ee037a47bc22 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zEM 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zEM 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zEM 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:31.942 18:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:31.942 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=28adedd4e6eda4c558f11cc9c4cc0eb1 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DVx 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 28adedd4e6eda4c558f11cc9c4cc0eb1 0 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 28adedd4e6eda4c558f11cc9c4cc0eb1 0 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=28adedd4e6eda4c558f11cc9c4cc0eb1 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DVx 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DVx 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DVx 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:31.943 18:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5b4d6074c7fc4011da4b77fd54bd1e80c26e067a87aea8aee491595f20c15b6 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zKe 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5b4d6074c7fc4011da4b77fd54bd1e80c26e067a87aea8aee491595f20c15b6 3 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5b4d6074c7fc4011da4b77fd54bd1e80c26e067a87aea8aee491595f20c15b6 3 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5b4d6074c7fc4011da4b77fd54bd1e80c26e067a87aea8aee491595f20c15b6 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zKe 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zKe 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zKe 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1600370 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1600370 ']' 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.943 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dHU 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.e0L ]] 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.e0L 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.510 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9Wv 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.bB7 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.bB7 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.P9v 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BP2 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BP2 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zEM 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DVx ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DVx 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zKe 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.511 18:31:58 
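Annotation: the rpc_cmd loop traced here registers every generated key file with the running target over the RPC socket that waitforlisten polled. Written out as standalone rpc.py calls it would look like the sketch below; the paths are this run's mktemp results, and keyid 4 has no controller key, so there is no ckey4.

declare -A keyfiles=(
    [key0]=/tmp/spdk.key-null.dHU    [ckey0]=/tmp/spdk.key-sha512.e0L
    [key1]=/tmp/spdk.key-null.9Wv    [ckey1]=/tmp/spdk.key-sha384.bB7
    [key2]=/tmp/spdk.key-sha256.P9v  [ckey2]=/tmp/spdk.key-sha256.BP2
    [key3]=/tmp/spdk.key-sha384.zEM  [ckey3]=/tmp/spdk.key-null.DVx
    [key4]=/tmp/spdk.key-sha512.zKe
)
for name in key0 ckey0 key1 ckey1 key2 ckey2 key3 ckey3 key4; do
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "$name" "${keyfiles[$name]}"
done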
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:32.511 18:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:33.446 Waiting for block devices as requested 00:31:33.446 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:33.446 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:33.703 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:33.703 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:33.961 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:33.961 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:33.961 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:33.961 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:34.220 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:34.220 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:34.220 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:34.220 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:34.478 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:34.479 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:34.479 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:34.479 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:34.737 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:35.305 No valid GPT data, bailing 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:35.305 18:32:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:35.305 00:31:35.305 Discovery Log Number of Records 2, Generation counter 2 00:31:35.305 =====Discovery Log Entry 0====== 00:31:35.305 trtype: tcp 00:31:35.305 adrfam: ipv4 00:31:35.305 subtype: current discovery subsystem 00:31:35.305 treq: not specified, sq flow control disable supported 00:31:35.305 portid: 1 00:31:35.305 trsvcid: 4420 00:31:35.305 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:35.305 traddr: 10.0.0.1 00:31:35.305 eflags: none 00:31:35.305 sectype: none 00:31:35.305 =====Discovery Log Entry 1====== 00:31:35.305 trtype: tcp 00:31:35.305 adrfam: ipv4 00:31:35.305 subtype: nvme subsystem 00:31:35.305 treq: not specified, sq flow control disable supported 00:31:35.305 portid: 1 00:31:35.305 trsvcid: 4420 00:31:35.305 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:35.305 traddr: 10.0.0.1 00:31:35.305 eflags: none 00:31:35.305 sectype: none 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
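Annotation: bash xtrace does not print redirections, so the bare mkdir/echo sequence above hides where each value lands. The sketch below reconstructs configure_kernel_target using the standard kernel nvmet configfs attribute paths; the attribute names are an assumption, the values are the ones from this run. The nvme discover output above (two log entries: the discovery subsystem plus nqn.2024-02.io.spdk:cnode0) confirms the port ended up listening.

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # the GPT-free disk found above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420   # cf. the discovery log above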
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.305 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.563 nvme0n1 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
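Annotation: each connect_authenticate round pairs target-side configfs writes (nvmet_auth_set_key) with initiator-side SPDK RPCs, as traced above for sha256/ffdhe2048/keyid 1. A sketch of one round by hand; the dhchap_* attribute names are the kernel nvmet ones and, since xtrace hides the redirections, an assumption rather than a replay of auth.sh:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # host/auth.sh@48
echo ffdhe2048      > "$host/dhchap_dhgroup"   # host/auth.sh@49
echo "$key"         > "$host/dhchap_key"       # @50: the DHHC-1:00:... secret
echo "$ckey"        > "$host/dhchap_ctrl_key"  # @51: the DHHC-1:02:... secret

scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1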
00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.563 nvme0n1 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.563 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.824 18:32:01 
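Annotation: every round then verifies the controller actually attached and tears it down again before the next combination is tried. The same check as standalone commands:

name=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] && scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_detach_controller nvme0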
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.824 nvme0n1 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:35.824 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.082 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.082 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.082 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.083 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.083 nvme0n1 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.083 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.343 nvme0n1 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.343 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.344 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.603 nvme0n1 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.603 18:32:02 
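Annotation: keyid 4 above carries no controller key (the ckey expansion is empty, hence the [[ -z '' ]] branch), so that round authenticates the host side only. The DHHC-1 secrets themselves embed a checksum; a quick decode of this round's key, again assuming the key-bytes-plus-little-endian-CRC-32 layout:

python - << 'EOF'
import base64, zlib
secret = "DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=:"
raw = base64.b64decode(secret.split(":")[2])
key, crc = raw[:-4], raw[-4:]
print(len(key), "byte key; crc ok:", zlib.crc32(key).to_bytes(4, "little") == crc)
EOF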
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.603 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.861 nvme0n1 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.861 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.862 
18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.862 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.122 nvme0n1 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.122 18:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.122 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.382 nvme0n1 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.382 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.383 18:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.383 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.643 nvme0n1 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:37.643 18:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.643 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.903 nvme0n1 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.903 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.163 nvme0n1 00:31:38.163 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.163 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.163 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.163 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.163 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.163 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:38.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:38.424 18:32:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.424 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.684 nvme0n1 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
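For reference, the host-side half of the iteration traced above can be replayed by hand with SPDK's scripts/rpc.py; a minimal sketch, using only the RPC names and flags that appear in this trace (the keyring entries named key1/ckey1 are an assumption here, they must already hold the DHHC-1 secrets before the attach):

    # Restrict the host to the digest/DH group pair under test (host/auth.sh@60)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Attach with DH-HMAC-CHAP enabled, as host/auth.sh@61 does
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller came up authenticated, then tear it down
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
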
00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.684 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.945 nvme0n1 00:31:38.945 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.945 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.945 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.945 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.945 18:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.945 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.516 nvme0n1 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.516 18:32:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.516 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.776 nvme0n1 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.777 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.347 nvme0n1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 
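The target-side half of each iteration is the nvmet_auth_set_key helper whose echoes recur throughout this trace (host/auth.sh@48-51). A minimal sketch of what those steps amount to, assuming the target is the Linux-kernel nvmet driver with its standard configfs host attributes; the $nvmet_host path and the $key/$ckey variables are illustrative stand-ins for the values visible above:

    # Illustrative configfs entry for the host NQN used by this test
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)' > "$nvmet_host/dhchap_hash"      # host/auth.sh@48
    echo ffdhe6144 > "$nvmet_host/dhchap_dhgroup"        # host/auth.sh@49
    echo "$key" > "$nvmet_host/dhchap_key"               # host/auth.sh@50, the DHHC-1:00:... value
    # A controller (bidirectional) key is only written when one is defined
    [[ -z "$ckey" ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"   # host/auth.sh@51
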
00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.347 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 nvme0n1 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.917 18:32:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.917 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.917 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.486 nvme0n1 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.486 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.746 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.315 nvme0n1 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.315 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.316 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.884 nvme0n1 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.884 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:43.852 nvme0n1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.852 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.790 nvme0n1 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:44.790 
18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.790 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.049 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.987 nvme0n1 00:31:45.987 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.987 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.987 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.987 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.987 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.988 
18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.988 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.928 nvme0n1 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.928 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.866 nvme0n1 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.866 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.126 nvme0n1 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.126 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.127 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.386 nvme0n1 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:48.386 18:32:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.386 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.645 nvme0n1 00:31:48.645 18:32:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.645 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.903 nvme0n1 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.903 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.904 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.904 nvme0n1 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.904 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.162 nvme0n1 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.162 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.420 
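Each pass ends the same way (host/auth.sh@64-65): list the controllers, check that exactly the expected one, nvme0, authenticated and came up, then detach it so the next keyid starts from a clean slate. Reconstructed from the trace, using the same RPC names:

    # verify the authenticated controller appeared, then tear it down
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] form in the log is this same comparison after xtrace has backslash-escaped the quoted pattern side.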
18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.420 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.421 18:32:15 
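The DHHC-1:NN:...: strings echoed at host/auth.sh@45-46 are DH-HMAC-CHAP secrets in the NVMe in-band-auth representation: the middle field tags the hash the secret is tied to (00 unhashed, 01/02/03 for SHA-256/384/512, going by the spec), and the base64 payload is the raw secret with a 4-byte CRC-32 appended. Treat the decode below as a sanity check of that framing rather than an authoritative parser; the key is the keyid=2 secret from this very trace:

    key='DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8:'
    b64=${key#DHHC-1:*:}      # strip the 'DHHC-1:01:' prefix
    b64=${b64%:}              # strip the trailing colon
    printf '%s' "$b64" | base64 -d | wc -c   # 36 bytes: 32-byte secret + CRC-32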
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.421 nvme0n1 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.421 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.679 nvme0n1 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.679 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.937 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.938 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.938 nvme0n1 00:31:49.938 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.938 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.938 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.938 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.938 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.938 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:50.196 
18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.196 nvme0n1 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.196 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.196 
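keyid=4 is the one slot with no controller secret: host/auth.sh@46 echoes an empty ckey and @51 short-circuits on [[ -z '' ]]. The ckey=(...) array assignment at host/auth.sh@58 leans on Bash's :+ expansion, so the --dhchap-ctrlr-key argument pair simply disappears when the slot is empty and only unidirectional authentication is requested. In miniature (keys truncated for illustration):

    ckeys=([2]="DHHC-1:01:YTc0..." [4]="")
    for keyid in 2 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr-key args>}"
    done
    # keyid=2 -> --dhchap-ctrlr-key ckey2
    # keyid=4 -> <no ctrlr-key args>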
18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.457 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.717 nvme0n1 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.717 18:32:16 
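The nvmf/common.sh@741-755 block repeated before every attach is get_main_ns_ip choosing which address to dial: an associative array maps the transport to the name of an environment variable, and that name is then dereferenced, which is why the trace first tests the literal string NVMF_INITIATOR_IP and only afterwards the resolved 10.0.0.1. A sketch of that shape; TEST_TRANSPORT is an assumed name for whatever variable held the 'tcp' seen at @747:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}   # picks the variable *name*
        [[ -z $ip ]] && return 1
        echo "${!ip}"                          # indirect expansion: 10.0.0.1 here
    }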
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.978 nvme0n1 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.978 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.237 nvme0n1 00:31:51.237 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.237 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.237 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.237 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.237 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.237 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.497 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.756 nvme0n1 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.756 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.757 18:32:17 
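Stepping back, the host/auth.sh@101-104 tags give away the overall shape of this section: an outer loop over DH groups (ffdhe2048, then ffdhe3072, then ffdhe4096 here, with ffdhe6144 opening just below) and an inner loop over all five key slots, where connect_authenticate does the attach, the nvme0 check, and the detach for each combination. A reconstructed skeleton, with the array names taken from the loop headers in the trace:

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
            nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # target side (@103)
            connect_authenticate sha384 "$dhgroup" "$keyid"   # host side (@104)
        done
    done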
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.757 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.017 nvme0n1 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
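
The for-loop markers separating the cycles (host/auth.sh@100-102, visible above as the advance from ffdhe4096 to ffdhe6144) come from a three-level sweep: every digest is paired with every DH group and every key index. Reconstructed from the trace:

    # driving loop per the auth.sh@100-104 markers in this log
    for digest in "${digests[@]}"; do         # sha384 and sha512 appear in this section
        for dhgroup in "${dhgroups[@]}"; do   # the ffdhe groups, ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do    # key indexes 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # auth.sh@104
            done
        done
    done
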
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.017 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.586 nvme0n1 00:31:52.586 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.586 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.586 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.586 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.586 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.586 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.844 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.415 nvme0n1 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.415 18:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:53.415 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.416 18:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.416 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.985 nvme0n1 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.985 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
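
get_main_ns_ip (nvmf/common.sh@741-755), traced before every attach, maps the active transport to the matching address variable; on this TCP run it always resolves to 10.0.0.1. A sketch, with the transport variable name assumed (the xtrace only shows its value, tcp, already substituted):

    # sketch of nvmf/common.sh:get_main_ns_ip, rebuilt from the xtrace
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()                            # common.sh@742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP           # common.sh@744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP               # common.sh@745
        # TEST_TRANSPORT is an assumed name for the variable holding 'tcp'
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # common.sh@748
        [[ -z ${!ip} ]] && return 1                          # common.sh@750
        echo "${!ip}"                                        # common.sh@755: 10.0.0.1
    }
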
key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.986 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.986 
18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.552 nvme0n1 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:54.552 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.553 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.119 nvme0n1 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.119 18:32:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.119 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.053 nvme0n1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.053 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.432 nvme0n1 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.432 
18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.432 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.367 nvme0n1 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.367 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.368 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.305 nvme0n1 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.305 18:32:25 
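
All of the secrets cycling through these runs use the NVMe-oF DH-HMAC-CHAP secret representation DHHC-1:NN:<base64>:, where NN names the hash used to transform the key material (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 blob carries the key followed by a 4-byte CRC-32. Splitting one of the keys from this log:

    # inspect a traced secret: key 0 of the sha384 groups above
    secret='DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B:'
    hmac=${secret#DHHC-1:}; hmac=${hmac%%:*}     # -> 00, an untransformed key
    blob=${secret#DHHC-1:*:}; blob=${blob%:}
    echo "hmac=$hmac keylen=$(( $(echo -n "$blob" | base64 -d | wc -c) - 4 ))"
    # prints: hmac=00 keylen=32  (decoded blob minus the 4-byte CRC)
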
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.305 18:32:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.305 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.243 nvme0n1 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.243 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.501 nvme0n1 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.501 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.502 nvme0n1 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.502 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.759 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:00.760 
18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.760 nvme0n1 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.760 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.049 
18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.049 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.050 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.050 nvme0n1 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.050 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.310 nvme0n1 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.310 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.571 nvme0n1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.571 
18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.571 18:32:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.571 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.831 nvme0n1 00:32:01.831 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.831 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.831 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.831 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:01.832 18:32:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.832 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.091 nvme0n1 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.091 18:32:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.091 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.092 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:02.092 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.092 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.352 nvme0n1 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.352 
18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.352 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
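The trace above repeats one fixed sequence for every digest/dhgroup/keyid combination. As a reading aid, here is a minimal bash sketch of a single pass, reconstructed from the host/auth.sh calls traced here (the sha512 / ffdhe3072 / keyid=4 pass that just completed). It assumes rpc_cmd forwards to SPDK's scripts/rpc.py as in the autotest harness, and that nvmet_auth_set_key has already installed the matching DHHC-1 key on the kernel nvmet target side; the literal values below are the ones visible in the trace, not a general recipe.

    # Limit the SPDK host to the digest/dhgroup pair under test
    # (host/auth.sh@60 in the trace).
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe3072

    # Authenticate and connect; keyid 4 carries no controller key
    # (ckey is empty in the trace), so --dhchap-ctrlr-key is omitted.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4

    # Verify the controller actually appeared, then detach so the next
    # pass starts clean (host/auth.sh@64-65 in the trace).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The outer loops at host/auth.sh@100-102 then advance keyid, dhgroup and digest, which is why the same handful of RPCs recurs through the rest of this log with only those three values changing.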
00:32:02.613 nvme0n1 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.613 18:32:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.613 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.182 nvme0n1 00:32:03.182 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.182 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.182 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.182 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.182 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.182 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.183 18:32:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.183 18:32:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.183 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.443 nvme0n1 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.443 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.703 nvme0n1 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.703 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.704 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.962 nvme0n1 00:32:03.962 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.962 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.962 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.962 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.962 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.222 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.481 nvme0n1 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.481 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.482 18:32:30 
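At this point the outer loop (host/auth.sh@101) has advanced from ffdhe4096 to ffdhe6144, and the inner loop (@102) restarts at key slot 0. In outline, the sweep driving this whole section is the nested iteration below; the array contents are inferred from the trace, and the digest is fixed at sha512 in this pass.

  dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                   # slots 0-4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side
          connect_authenticate sha512 "$dhgroup" "$keyid"  # initiator side
      done
  done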
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.482 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.052 nvme0n1 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.052 18:32:31 
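The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced at host/auth.sh@58 relies on bash's ${var:+word} expansion: the option pair materializes only when the slot has a controller secret, so slot 4 (whose ckey is empty in this run) attaches with unidirectional authentication. A standalone illustration, using a placeholder secret:

  declare -a ckeys=([1]='DHHC-1:02:placeholder:')   # slot 4 deliberately left unset
  for keyid in 1 4; do
      opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${opt[*]:-<no controller key>}"
  done
  # keyid=1 -> --dhchap-ctrlr-key ckey1
  # keyid=4 -> <no controller key>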
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.052 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.621 nvme0n1 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.621 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.880 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.446 nvme0n1 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.446 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.014 nvme0n1 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.014 18:32:32 
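The get_main_ns_ip fragments repeated throughout (nvmf/common.sh@741-755) map the active transport to the name of the environment variable holding the initiator-side address, then dereference it. A sketch of that logic follows; the transport variable's name is an assumption, since the trace shows only its value, tcp, already substituted.

  get_main_ns_ip() {
      local ip var
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -n ${TEST_TRANSPORT:-} ]] || return 1   # assumed variable name
      var=${ip_candidates[$TEST_TRANSPORT]}
      [[ -n $var ]] || return 1
      ip=${!var}                  # indirect expansion: variable name -> value
      [[ -n $ip ]] && echo "$ip"  # prints 10.0.0.1 in this run
  }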
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.014 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.595 nvme0n1 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.595 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTViMWQzNmIxYWQ0YzI5MTBlMzE2OTU4MDE1NGQ5ODiBtT2B: 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: ]] 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDlkZTgxOTVkMDJiMjY5NjllMjVhODY3M2VlYTRlYzgzYWMzNDIzMWM5Nzk4MWMwM2Q0NmFmMGVkNjM5NmY2M7vkJCw=: 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.596 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 nvme0n1 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:08.532 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.533 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.469 nvme0n1 00:32:09.469 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.469 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.470 18:32:35 
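The verification step that follows every attach in this log reads the controller list back and compares names. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] is simply how xtrace renders a quoted right-hand side: escaping each character keeps [[ == ]] from treating it as a glob pattern. Unescaped, the check amounts to:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]] && rpc_cmd bdev_nvme_detach_controller nvme0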
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUzMGUzZjJhZDQ4YWU5NGEwMDMxYzI1Yzk2NDMwMmMj9hN8: 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc0OTRhNDcwZThkODhiNzlmMjYyNDdiOTljOTQ1NWEOqO/E: 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.470 18:32:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.470 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.404 nvme0n1 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.404 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2YyYTYyNjEzYjE2ODQxODQyYTQzN2UzNzVkMGUzZDExNmM5ZWUwMzdhNDdiYzIyiB/tog==: 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: ]] 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjhhZGVkZDRlNmVkYTRjNTU4ZjExY2M5YzRjYzBlYjHuFMY5: 00:32:10.662 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:10.663 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.663 
18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.601 nvme0n1 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViNGQ2MDc0YzdmYzQwMTFkYTRiNzdmZDU0YmQxZTgwYzI2ZTA2N2E4N2FlYThhZWU0OTE1OTVmMjBjMTViNinPiMw=: 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.601 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.539 nvme0n1 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==: 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyZjA2MGIzYjJhOWFjYTBhMDE4NTQ5N2U0MmQ3NDQ3M2QxMGFhZjUwM2Q5MjYwjxbJFA==: 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.539 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.539 request: 00:32:12.539 { 00:32:12.539 "name": "nvme0", 00:32:12.539 "trtype": "tcp", 00:32:12.539 "traddr": "10.0.0.1", 00:32:12.539 "adrfam": "ipv4", 00:32:12.540 "trsvcid": "4420", 00:32:12.540 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:12.540 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:12.540 "prchk_reftag": false, 00:32:12.540 "prchk_guard": false, 00:32:12.540 "hdgst": false, 00:32:12.540 "ddgst": false, 00:32:12.540 "method": "bdev_nvme_attach_controller", 00:32:12.540 "req_id": 1 00:32:12.540 } 00:32:12.540 Got JSON-RPC error response 00:32:12.540 response: 00:32:12.540 { 00:32:12.540 "code": -5, 00:32:12.540 "message": "Input/output error" 00:32:12.540 } 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:12.540 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.798 18:32:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.798 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.798 request: 00:32:12.799 { 00:32:12.799 "name": "nvme0", 00:32:12.799 "trtype": "tcp", 00:32:12.799 "traddr": "10.0.0.1", 00:32:12.799 "adrfam": "ipv4", 00:32:12.799 "trsvcid": "4420", 00:32:12.799 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:12.799 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:12.799 "prchk_reftag": false, 00:32:12.799 "prchk_guard": false, 00:32:12.799 "hdgst": false, 00:32:12.799 "ddgst": false, 00:32:12.799 "dhchap_key": "key2", 00:32:12.799 "method": "bdev_nvme_attach_controller", 00:32:12.799 "req_id": 1 00:32:12.799 } 00:32:12.799 Got JSON-RPC error response 00:32:12.799 response: 00:32:12.799 { 00:32:12.799 "code": -5, 00:32:12.799 "message": "Input/output error" 00:32:12.799 } 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.799 request: 00:32:12.799 { 00:32:12.799 "name": "nvme0", 00:32:12.799 "trtype": "tcp", 00:32:12.799 "traddr": "10.0.0.1", 00:32:12.799 "adrfam": "ipv4", 00:32:12.799 "trsvcid": "4420", 00:32:12.799 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:12.799 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:12.799 "prchk_reftag": false, 00:32:12.799 "prchk_guard": false, 00:32:12.799 "hdgst": false, 00:32:12.799 "ddgst": false, 00:32:12.799 "dhchap_key": "key1", 00:32:12.799 "dhchap_ctrlr_key": "ckey2", 00:32:12.799 "method": "bdev_nvme_attach_controller", 00:32:12.799 "req_id": 1 00:32:12.799 } 00:32:12.799 Got JSON-RPC error response 00:32:12.799 response: 00:32:12.799 { 00:32:12.799 "code": -5, 00:32:12.799 "message": "Input/output error" 00:32:12.799 } 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:12.799 rmmod nvme_tcp 00:32:12.799 rmmod nvme_fabrics 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1600370 ']' 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1600370 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1600370 ']' 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1600370 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.799 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600370 00:32:13.075 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:13.075 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:13.075 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600370' 00:32:13.075 killing process with pid 1600370 00:32:13.075 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1600370 00:32:13.075 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1600370 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:13.075 18:32:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.075 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:15.615 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:16.550 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:16.550 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:16.550 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:17.485 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:17.485 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dHU /tmp/spdk.key-null.9Wv /tmp/spdk.key-sha256.P9v /tmp/spdk.key-sha384.zEM /tmp/spdk.key-sha512.zKe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:17.485 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:18.889 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:18.889 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:18.889 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:18.889 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:18.889 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:18.889 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:18.889 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:18.889 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:18.889 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:18.889 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:18.889 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:18.889 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:18.889 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:18.889 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:18.889 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:18.889 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:18.889 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:18.889 00:32:18.889 real 0m49.777s 00:32:18.889 user 0m47.556s 00:32:18.889 sys 0m5.844s 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.889 ************************************ 00:32:18.889 END TEST nvmf_auth_host 00:32:18.889 ************************************ 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.889 ************************************ 00:32:18.889 START TEST nvmf_digest 00:32:18.889 ************************************ 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:18.889 * Looking for test storage... 
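Annotation: the nvmf_auth_host run that ends above drives SPDK's in-band DHCHAP flow entirely over rpc_cmd. For each key id it programs the kernel target (nvmet_auth_set_key), pins the host to one digest/dhgroup pair (bdev_nvme_set_options), attaches with the matching --dhchap-key/--dhchap-ctrlr-key, and finally asserts that a missing or wrong key fails the attach with JSON-RPC error -5 (Input/output error), which is what the NOT-wrapped calls check. Teardown then removes the configfs entries under /sys/kernel/config/nvmet innermost-first before modprobe -r nvmet_tcp nvmet. Below is a minimal sketch of the positive path only, assuming a running target, SPDK's scripts/rpc.py, and key names (key0/ckey0) registered with the keyring earlier in auth.sh (not shown in this excerpt); the DHHC-1 string is one of the secrets echoed above, which in the NVMe TP 8006 representation is DHHC-1:<hh>:<base64 of secret plus CRC-32>: with <hh> selecting the HMAC.

  # Sketch only, not a copy of auth.sh; key0/ckey0 are pre-registered keyring names.
  key='DHHC-1:00:MDhjNWFhNWRhODkxNTRkN2NkM2I0MWM0MWY1OWM4MWU0Yzk5MDk0MTViODdmOGM2gTN24Q==:'
  p=${key#DHHC-1:??:}; p=${p%:}        # keep only the base64 payload
  echo -n "$p" | base64 -d | wc -c     # 52 bytes here = 48-byte secret + 4-byte CRC-32
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

Running the same attach without --dhchap-key against an auth-enforcing target is the negative case exercised above and is expected to return the -5 error response recorded in the log.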
00:32:18.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:18.889 
18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:18.889 18:32:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:20.794 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.794 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.794 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.794 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.794 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.794 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.795 
18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.795 18:32:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:20.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:32:20.795 00:32:20.795 --- 10.0.0.2 ping statistics --- 00:32:20.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.795 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:20.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:32:20.795 00:32:20.795 --- 10.0.0.1 ping statistics --- 00:32:20.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.795 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:20.795 ************************************ 00:32:20.795 START TEST nvmf_digest_clean 00:32:20.795 ************************************ 00:32:20.795 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1609753 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1609753 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1609753 ']' 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:20.796 18:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:20.796 [2024-07-26 18:32:46.922840] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:20.796 [2024-07-26 18:32:46.922914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.054 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.054 [2024-07-26 18:32:46.962165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:21.054 [2024-07-26 18:32:46.992503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.054 [2024-07-26 18:32:47.083042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.054 [2024-07-26 18:32:47.083121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.054 [2024-07-26 18:32:47.083138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.054 [2024-07-26 18:32:47.083152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:21.054 [2024-07-26 18:32:47.083164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.054 [2024-07-26 18:32:47.083197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.054 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:21.054 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:21.054 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:21.054 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:21.054 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:21.312 null0 00:32:21.312 [2024-07-26 18:32:47.329655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.312 [2024-07-26 18:32:47.353908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1609875 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1609875 /var/tmp/bperf.sock 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1609875 ']' 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:21.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:21.312 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:21.312 [2024-07-26 18:32:47.404990] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:21.312 [2024-07-26 18:32:47.405070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609875 ] 00:32:21.312 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.313 [2024-07-26 18:32:47.438860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:21.570 [2024-07-26 18:32:47.470855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.570 [2024-07-26 18:32:47.561404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.570 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:21.570 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:21.570 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:21.570 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:21.570 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:21.828 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.828 18:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:22.395 nvme0n1 00:32:22.395 18:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:22.395 18:32:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:22.653 Running I/O for 2 seconds... 
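(annotation) The fabric this pass runs on was built by the nvmf_tcp_init records above: the second ice port is moved into a fresh network namespace so the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) talk over a real TCP path on one host. A minimal sketch of that setup, assuming a veth pair stands in for the two physical cvl_0_* ports when no E810 NIC is available; everything else mirrors the commands traced above:

  # Hedged reconstruction of nvmf_tcp_init (nvmf/common.sh);
  # only the veth pair is an assumption.
  ip netns add cvl_0_0_ns_spdk
  ip link add cvl_0_1 type veth peer name cvl_0_0        # stand-in for the two NIC ports
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # NVMF_INITIATOR_IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
  ping -c 1 10.0.0.2                                     # both directions are pinged before any test runs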
00:32:24.557 00:32:24.557 Latency(us) 00:32:24.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.557 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:24.557 nvme0n1 : 2.01 19672.77 76.85 0.00 0.00 6496.59 3301.07 11845.03 00:32:24.557 =================================================================================================================== 00:32:24.557 Total : 19672.77 76.85 0.00 0.00 6496.59 3301.07 11845.03 00:32:24.557 0 00:32:24.557 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:24.557 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:24.557 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:24.557 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:24.557 | select(.opcode=="crc32c") 00:32:24.557 | "\(.module_name) \(.executed)"' 00:32:24.557 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1609875 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1609875 ']' 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1609875 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609875 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609875' 00:32:24.817 killing process with pid 1609875 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1609875 00:32:24.817 Received shutdown signal, test time was about 2.000000 seconds 00:32:24.817 00:32:24.817 Latency(us) 00:32:24.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.817 =================================================================================================================== 00:32:24.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.817 18:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1609875 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1610281 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1610281 /var/tmp/bperf.sock 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1610281 ']' 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.076 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.076 [2024-07-26 18:32:51.128045] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:25.076 [2024-07-26 18:32:51.128140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610281 ] 00:32:25.076 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:25.076 Zero copy mechanism will not be used. 00:32:25.076 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.076 [2024-07-26 18:32:51.159681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
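(annotation) The accel_get_stats / jq exchange in the pass above is the actual pass/fail check of nvmf_digest_clean: it reads the crc32c counters over bperf's RPC socket and expects them to be non-zero and attributed to the software module, since scan_dsa=false requested no DSA offload. A condensed sketch of get_accel_stats as traced:

  # Assumes bdevperf is still listening on /var/tmp/bperf.sock.
  read -r acc_module acc_executed < <(
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]]   # the digest really ran, in software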
00:32:25.076 [2024-07-26 18:32:51.192506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.334 [2024-07-26 18:32:51.281442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.334 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.334 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:25.334 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:25.334 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:25.334 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:25.592 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:25.592 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.159 nvme0n1 00:32:26.159 18:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:26.159 18:32:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:26.159 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:26.159 Zero copy mechanism will not be used. 00:32:26.159 Running I/O for 2 seconds... 
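(annotation) Every bperf instance here is launched with --wait-for-rpc, so the bdev layer stays unconfigured until the script calls framework_start_init over /var/tmp/bperf.sock (the bperf_rpc call traced above); that ordering is what lets the digest tests adjust accel settings before any I/O path exists. A sketch of the handshake, assuming the rpc_get_methods liveness probe used by autotest_common.sh's waitforlisten:

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!
  # poll the UNIX-domain RPC socket until bdevperf answers
  until scripts/rpc.py -s /var/tmp/bperf.sock -t 1 rpc_get_methods &>/dev/null; do
    kill -0 "$bperfpid" || exit 1   # bail out if bdevperf died during startup
    sleep 0.5
  done
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init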
00:32:28.067 00:32:28.067 Latency(us) 00:32:28.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.067 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:28.067 nvme0n1 : 2.00 2919.51 364.94 0.00 0.00 5475.91 4223.43 7330.32 00:32:28.067 =================================================================================================================== 00:32:28.067 Total : 2919.51 364.94 0.00 0.00 5475.91 4223.43 7330.32 00:32:28.067 0 00:32:28.067 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:28.067 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:28.067 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:28.067 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:28.067 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:28.067 | select(.opcode=="crc32c") 00:32:28.067 | "\(.module_name) \(.executed)"' 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1610281 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1610281 ']' 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1610281 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610281 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610281' 00:32:28.327 killing process with pid 1610281 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1610281 00:32:28.327 Received shutdown signal, test time was about 2.000000 seconds 00:32:28.327 00:32:28.327 Latency(us) 00:32:28.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.327 =================================================================================================================== 00:32:28.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.327 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1610281 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1610688 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1610688 /var/tmp/bperf.sock 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1610688 ']' 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.586 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:28.586 [2024-07-26 18:32:54.698871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:28.586 [2024-07-26 18:32:54.698960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610688 ] 00:32:28.586 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.844 [2024-07-26 18:32:54.732643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
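(annotation) The four clean passes differ only in workload, I/O size, and queue depth; startup, the --ddgst attach, perform_tests, the stats check, and teardown are the same run_bperf helper each time. The matrix, written out as the loop that host/digest.sh@128-131 effectively performs (run_bperf's fourth argument is scan_dsa, false throughout this job):

  for cfg in "randread 4096 128" "randread 131072 16" \
             "randwrite 4096 128" "randwrite 131072 16"; do
    read -r rw bs qd <<< "$cfg"
    run_bperf "$rw" "$bs" "$qd" false   # sequential: each pass reuses /var/tmp/bperf.sock
  done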
00:32:28.844 [2024-07-26 18:32:54.760540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.844 [2024-07-26 18:32:54.851670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.844 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.844 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:28.844 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:28.844 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:28.844 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:29.412 18:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.412 18:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.671 nvme0n1 00:32:29.671 18:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:29.671 18:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.930 Running I/O for 2 seconds... 
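(annotation) The digest work itself comes from the attach flags, not from the workload: --ddgst on bdev_nvme_attach_controller enables the NVMe/TCP data digest, so the host generates a CRC32C for every data PDU it sends and verifies one on every data PDU it receives. Those are exactly the crc32c executions counted by accel_get_stats afterwards. The attach call as traced, with comments:

  # --ddgst = data digest: CRC32C per data PDU.
  # 10.0.0.2:4420 is the target side of the namespace fabric.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0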
00:32:31.834 00:32:31.834 Latency(us) 00:32:31.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.834 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:31.834 nvme0n1 : 2.01 18682.25 72.98 0.00 0.00 6835.60 3713.71 10388.67 00:32:31.834 =================================================================================================================== 00:32:31.834 Total : 18682.25 72.98 0.00 0.00 6835.60 3713.71 10388.67 00:32:31.834 0 00:32:31.834 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:31.834 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:31.834 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:31.834 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:31.834 | select(.opcode=="crc32c") 00:32:31.834 | "\(.module_name) \(.executed)"' 00:32:31.834 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1610688 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1610688 ']' 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1610688 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610688 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610688' 00:32:32.094 killing process with pid 1610688 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1610688 00:32:32.094 Received shutdown signal, test time was about 2.000000 seconds 00:32:32.094 00:32:32.094 Latency(us) 00:32:32.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.094 =================================================================================================================== 00:32:32.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.094 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1610688 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1611170 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1611170 /var/tmp/bperf.sock 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1611170 ']' 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:32.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:32.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:32.354 [2024-07-26 18:32:58.445928] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:32.354 [2024-07-26 18:32:58.446016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611170 ] 00:32:32.354 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:32.354 Zero copy mechanism will not be used. 00:32:32.354 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.354 [2024-07-26 18:32:58.484173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
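(annotation) Each teardown above runs the same guard chain from common/autotest_common.sh before signalling bdevperf. Condensed to a sketch; the @-line comments map to the trace above:

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                           # @950: a pid was given
    kill -0 "$pid" || return 1                          # @954: it is still alive
    local process_name
    if [[ $(uname) == Linux ]]; then                    # @955
      process_name=$(ps --no-headers -o comm= "$pid")   # @956: e.g. reactor_1
    fi
    [[ $process_name != sudo ]] || return 1             # @960: never kill a sudo wrapper
    echo "killing process with pid $pid"                # @968
    kill "$pid"                                         # @969
    wait "$pid"                                         # @974: reap it before moving on
  }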
00:32:32.613 [2024-07-26 18:32:58.514873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.613 [2024-07-26 18:32:58.606679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.613 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:32.613 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:32.613 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:32.613 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:32.613 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:33.181 18:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.181 18:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.439 nvme0n1 00:32:33.439 18:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:33.439 18:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:33.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.697 Zero copy mechanism will not be used. 00:32:33.697 Running I/O for 2 seconds... 
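(annotation) The bdevperf result tables are easy to sanity-check: MiB/s is IOPS times I/O size divided by 2^20. For the randwrite 4 KiB row reported above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 18682.25 * 4096 / (1024 * 1024) }'
  # -> 72.98 MiB/s, matching the table; the 128 KiB rows scale the same
  # way (2919.51 * 131072 / 2^20 = 364.94 MiB/s)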
00:32:35.597 00:32:35.597 Latency(us) 00:32:35.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.597 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:35.597 nvme0n1 : 2.01 1764.03 220.50 0.00 0.00 9046.45 7087.60 19029.71 00:32:35.597 =================================================================================================================== 00:32:35.597 Total : 1764.03 220.50 0.00 0.00 9046.45 7087.60 19029.71 00:32:35.597 0 00:32:35.597 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:35.597 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:35.597 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:35.597 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:35.597 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:35.597 | select(.opcode=="crc32c") 00:32:35.597 | "\(.module_name) \(.executed)"' 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1611170 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1611170 ']' 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1611170 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611170 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:35.855 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611170' 00:32:35.855 killing process with pid 1611170 00:32:35.856 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1611170 00:32:35.856 Received shutdown signal, test time was about 2.000000 seconds 00:32:35.856 00:32:35.856 Latency(us) 00:32:35.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.856 =================================================================================================================== 00:32:35.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.856 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1611170 00:32:36.113 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1609753 00:32:36.113 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1609753 ']' 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1609753 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609753 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609753' 00:32:36.114 killing process with pid 1609753 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1609753 00:32:36.114 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1609753 00:32:36.372 00:32:36.372 real 0m15.480s 00:32:36.372 user 0m30.663s 00:32:36.372 sys 0m4.180s 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.372 ************************************ 00:32:36.372 END TEST nvmf_digest_clean 00:32:36.372 ************************************ 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:36.372 ************************************ 00:32:36.372 START TEST nvmf_digest_error 00:32:36.372 ************************************ 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1611765 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:36.372 18:33:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1611765 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1611765 ']' 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.372 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.372 [2024-07-26 18:33:02.452807] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:36.372 [2024-07-26 18:33:02.452890] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.372 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.372 [2024-07-26 18:33:02.490271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:36.659 [2024-07-26 18:33:02.516578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.660 [2024-07-26 18:33:02.600199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.660 [2024-07-26 18:33:02.600255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.660 [2024-07-26 18:33:02.600278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.660 [2024-07-26 18:33:02.600289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.660 [2024-07-26 18:33:02.600299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
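(annotation) nvmf_digest_error restarts the target with --wait-for-rpc for a different reason than the clean test: accel opcode routing can only be changed before framework initialization. The accel_assign_opc call that follows routes every crc32c operation through the error-injection accel module, which passes data through untouched until an error is armed. A hedged sketch of the bring-up (framework_start_init is assumed to be part of the batched rpc_cmd configuration):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # route crc32c through the "error" module while the framework is still down
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  scripts/rpc.py framework_start_init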
00:32:36.660 [2024-07-26 18:33:02.600324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.660 [2024-07-26 18:33:02.680884] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.660 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.923 null0 00:32:36.923 [2024-07-26 18:33:02.793006] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.923 [2024-07-26 18:33:02.817206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1611791 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1611791 /var/tmp/bperf.sock 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1611791 ']' 
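(annotation) The initiator side of the error pass is configured in the records below: unlimited bdev retries, so a corrupted digest costs a retry instead of failing the job, plus per-status error counters for the final check. Condensed from the host/digest.sh calls as traced, in the order they appear:

  # retry forever and keep NVMe error statistics
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # target: clean slate
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # arm 256 corruptions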
00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:36.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.923 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.923 [2024-07-26 18:33:02.863030] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:36.923 [2024-07-26 18:33:02.863144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611791 ] 00:32:36.923 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.923 [2024-07-26 18:33:02.897401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:36.923 [2024-07-26 18:33:02.930432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.923 [2024-07-26 18:33:03.025221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.182 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:37.182 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:37.182 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:37.182 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:37.440 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:37.440 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.440 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.440 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.440 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:37.440 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:37.699 nvme0n1 00:32:37.959 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 
256 00:32:37.959 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.959 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.959 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.959 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:37.959 18:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:37.959 Running I/O for 2 seconds... 00:32:37.959 [2024-07-26 18:33:03.993489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:37.959 [2024-07-26 18:33:03.993549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.959 [2024-07-26 18:33:03.993572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.959 [2024-07-26 18:33:04.017705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:37.959 [2024-07-26 18:33:04.017745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.959 [2024-07-26 18:33:04.017765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.959 [2024-07-26 18:33:04.039780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:37.959 [2024-07-26 18:33:04.039813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.959 [2024-07-26 18:33:04.039831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.959 [2024-07-26 18:33:04.061374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:37.959 [2024-07-26 18:33:04.061421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.959 [2024-07-26 18:33:04.061438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.959 [2024-07-26 18:33:04.079764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:37.959 [2024-07-26 18:33:04.079797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.959 [2024-07-26 18:33:04.079814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.959 [2024-07-26 18:33:04.094406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:37.959 [2024-07-26 18:33:04.094437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.959 [2024-07-26 18:33:04.094453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.116398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.116430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.116452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.136506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.136537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.136555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.156888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.156929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.156947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.172345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.172392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.172409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.194002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.194034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.194051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.214867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.214898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.214914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.235018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.235071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.235105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.255477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.255510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.255528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.270799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.270830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.270848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.292863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.292895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.292912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.313870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.313901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.313919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.333905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.333937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.333969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.219 [2024-07-26 18:33:04.354329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.219 [2024-07-26 18:33:04.354377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.219 [2024-07-26 18:33:04.354396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.370564] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.370596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.370614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.391783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.391814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.391831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.412846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.412877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.412894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.432997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.433030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.433069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.448189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.448220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.448238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.468207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.468254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.468272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.488546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.488578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.488603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:38.480 [2024-07-26 18:33:04.510139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.510171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.510189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.530138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.530876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.531011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.544195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.544227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.544245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.565660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.565692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.565709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.586679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.586711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.586729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.480 [2024-07-26 18:33:04.606579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.480 [2024-07-26 18:33:04.606611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.480 [2024-07-26 18:33:04.606628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.625653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.625685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.625704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.646588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.646619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.646637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.661817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.661848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.661866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.683384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.683417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.683435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.705257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.705289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.705308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.726954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.726986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.748521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.748553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.748571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.769375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.769407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.769426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.787616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.787647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.787664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.803598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.803641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.803659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.826230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.826262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.826287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.846919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.846952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.846984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.741 [2024-07-26 18:33:04.868454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:38.741 [2024-07-26 18:33:04.868486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.741 [2024-07-26 18:33:04.868503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.886101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.886135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:04.886153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.900951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.900983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:39.001 [2024-07-26 18:33:04.901000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.921796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.921828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:04.921845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.942611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.942644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:04.942661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.961731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.961763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:04.961781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.976758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.976790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:04.976807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:04.996178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:04.996216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:04.996234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.018327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.018375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.018393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.039665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.039697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:23114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.039714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.060527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.060559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.060577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.079532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.079563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.079580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.101221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.101253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.101271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.116034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.116087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.116107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.001 [2024-07-26 18:33:05.137204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.001 [2024-07-26 18:33:05.137236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.001 [2024-07-26 18:33:05.137254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.261 [2024-07-26 18:33:05.159039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.261 [2024-07-26 18:33:05.159094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.159112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.180108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.180155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.180172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.203728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.203765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.203785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.220217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.220248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.220265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.242184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.242216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.242232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.266249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.266282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.266300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.288435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.288472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.288493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.313212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.313244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.313260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.334758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 
00:32:39.262 [2024-07-26 18:33:05.334795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.334814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.350878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.350915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.350941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.374687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.374724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.374744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.262 [2024-07-26 18:33:05.397072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.262 [2024-07-26 18:33:05.397109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.262 [2024-07-26 18:33:05.397141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.420236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.420270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.420288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.440349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.440381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.440398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.464245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.464276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.464293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.480738] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.480775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.480795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.503918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.503955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.503976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.527645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.527682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.527703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.549793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.549830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.550441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.565540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.565577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.565598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.588319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.588364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.588380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.611658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.611695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.611716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:39.522 [2024-07-26 18:33:05.632785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.632822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.632842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.522 [2024-07-26 18:33:05.657695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.522 [2024-07-26 18:33:05.657732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.522 [2024-07-26 18:33:05.657752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.782 [2024-07-26 18:33:05.677845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.782 [2024-07-26 18:33:05.677882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.782 [2024-07-26 18:33:05.677902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.782 [2024-07-26 18:33:05.695128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.782 [2024-07-26 18:33:05.695159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.782 [2024-07-26 18:33:05.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.782 [2024-07-26 18:33:05.718281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.782 [2024-07-26 18:33:05.718312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.782 [2024-07-26 18:33:05.718350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.782 [2024-07-26 18:33:05.742011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.782 [2024-07-26 18:33:05.742048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.742079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.765327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.765373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.765394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.785460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.785498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.785518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.801966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.802003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.802023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.825465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.825503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.825524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.848912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.848950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.848971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.871937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.871976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.871996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.895077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.895125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.895142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.783 [2024-07-26 18:33:05.918381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280) 00:32:39.783 [2024-07-26 18:33:05.918425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.783 [2024-07-26 18:33:05.918445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:40.042 [2024-07-26 18:33:05.940659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280)
00:32:40.042 [2024-07-26 18:33:05.940697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:40.042 [2024-07-26 18:33:05.940717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:40.042 [2024-07-26 18:33:05.956973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280)
00:32:40.042 [2024-07-26 18:33:05.957010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:40.042 [2024-07-26 18:33:05.957031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:40.042 [2024-07-26 18:33:05.978090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110e280)
00:32:40.042 [2024-07-26 18:33:05.978140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:40.042 [2024-07-26 18:33:05.978157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:40.042
00:32:40.042 Latency(us)
00:32:40.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:40.042 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:40.042 nvme0n1 : 2.06 12207.73 47.69 0.00 0.00 10262.74 4150.61 64856.37
00:32:40.042 ===================================================================================================================
00:32:40.042 Total : 12207.73 47.69 0.00 0.00 10262.74 4150.61 64856.37
00:32:40.042 0
00:32:40.042 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:40.042 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:40.042 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:40.042 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:40.042 | .driver_specific
00:32:40.042 | .nvme_error
00:32:40.042 | .status_code
00:32:40.042 | .command_transient_transport_error'
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 98 > 0 ))
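
The (( 98 > 0 )) above is the pass condition: 98 reads completed with the injected TRANSIENT TRANSPORT ERROR status during the 2-second run, and the count is pulled straight from bdevperf's iostat. A standalone sketch of that get_transient_errcount step, assuming the same socket and bdev name as above (the single-line jq path is equivalent to the multi-line filter in the xtrace; rpc.py again stands in for the full scripts/rpc.py path):

    # --nvme-error-stat (set earlier) makes bdev_get_iostat report per-status-code
    # NVMe error counters under driver_specific.nvme_error.
    errcount=$(rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Fail the case unless at least one injected digest error was observed.
    (( errcount > 0 ))
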
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1611791
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1611791 ']'
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1611791
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611791
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611791'
00:32:40.301 killing process with pid 1611791
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1611791
00:32:40.301 Received shutdown signal, test time was about 2.000000 seconds
00:32:40.301
00:32:40.301 Latency(us)
00:32:40.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:40.301 ===================================================================================================================
00:32:40.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:40.301 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1611791
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1612318
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1612318 /var/tmp/bperf.sock
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1612318 ']'
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:40.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:40.559 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:40.559 [2024-07-26 18:33:06.612745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
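
With the 4 KiB case torn down, run_bperf_err repeats the same error test at a 128 KiB block size and queue depth 16, which is why the zero-copy threshold message appears in the startup output that continues below (131072 is above the 65536 threshold). A sketch of the launch as it appears in the xtrace, with the full bdevperf path shortened and the flag glosses taken as assumptions from bdevperf's usage text:

    # -m 2: core mask 0x2 (a single reactor); -r: RPC socket used by bperf_rpc/bperf_py;
    # -w randread -o 131072 -q 16: workload, I/O size in bytes, queue depth;
    # -t 2: 2-second runtime; -z: start idle and wait for a perform_tests RPC.
    /path/to/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # After re-attaching nvme0 with --ddgst and re-arming crc32c corruption (-i 32),
    # the run is kicked off exactly as before:
    bdevperf.py -s /var/tmp/bperf.sock perform_tests
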
00:32:40.559 [2024-07-26 18:33:06.612822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612318 ] 00:32:40.559 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:40.559 Zero copy mechanism will not be used. 00:32:40.559 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.559 [2024-07-26 18:33:06.644820] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.559 [2024-07-26 18:33:06.678561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.817 [2024-07-26 18:33:06.770506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.817 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.817 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:40.817 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:40.817 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:41.075 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:41.075 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.075 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.075 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.075 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.075 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.642 nvme0n1 00:32:41.642 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:41.642 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.642 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.642 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.642 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:41.642 18:33:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:41.901 I/O size of 131072 is greater than zero copy threshold (65536). 
00:32:41.901 Zero copy mechanism will not be used. 00:32:41.901 Running I/O for 2 seconds... 00:32:41.901 [2024-07-26 18:33:07.810295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.810371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.810394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.826566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.826605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.826625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.843151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.843184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.843201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.859303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.859350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.859379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.875359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.875409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.875439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.891879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.891916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.891936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.908334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.908391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.908408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.924890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.924928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.924948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.940915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.940952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.940972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.957640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.957677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.957697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.973909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.973945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.973965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:07.989822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:07.989860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:07.989880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:08.005778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:08.005815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:08.005836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:08.021882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:08.021921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:41.901 [2024-07-26 18:33:08.021941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.901 [2024-07-26 18:33:08.038235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:41.901 [2024-07-26 18:33:08.038283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.901 [2024-07-26 18:33:08.038301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.054670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.054707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.054727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.070901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.070938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.070958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.087290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.087324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.087356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.103435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.103471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.103491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.119300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.119358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.119393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.135470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.135507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.135527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.151105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.151156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.151179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.167244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.167276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.167293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.182894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.182932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.182953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.198612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.198649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.198671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.214694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.214731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.214751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.230467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.230504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.230524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.246858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.246896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.246917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.263346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.263407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.263427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.279121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.279153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.279170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.160 [2024-07-26 18:33:08.295023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.160 [2024-07-26 18:33:08.295072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.160 [2024-07-26 18:33:08.295110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.312868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.312906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.312926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.328387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.328438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.328458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.344378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.344416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.344436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.360664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 
00:32:42.419 [2024-07-26 18:33:08.360702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.360722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.376437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.376475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.376495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.392571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.392609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.392628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.408307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.408338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.408369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.424083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.424129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.424146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.440328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.440360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.440393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.456378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.456415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.456435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.472773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.472811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.472830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.489243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.489274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.489306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.505079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.505129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.505148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.521544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.521584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.419 [2024-07-26 18:33:08.521604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.419 [2024-07-26 18:33:08.538109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.419 [2024-07-26 18:33:08.538140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.420 [2024-07-26 18:33:08.538156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.420 [2024-07-26 18:33:08.554149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.420 [2024-07-26 18:33:08.554181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.420 [2024-07-26 18:33:08.554198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.677 [2024-07-26 18:33:08.570619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.677 [2024-07-26 18:33:08.570658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.677 [2024-07-26 18:33:08.570686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.677 [2024-07-26 18:33:08.586810] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.677 [2024-07-26 18:33:08.586850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.677 [2024-07-26 18:33:08.586881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.677 [2024-07-26 18:33:08.602683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.677 [2024-07-26 18:33:08.602717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.677 [2024-07-26 18:33:08.602736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.677 [2024-07-26 18:33:08.618932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.677 [2024-07-26 18:33:08.618971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.677 [2024-07-26 18:33:08.618999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.677 [2024-07-26 18:33:08.634732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.677 [2024-07-26 18:33:08.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.634796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.650215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.650249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.650267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.665744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.665781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.665800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.681717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.681763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.681783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:32:42.678 [2024-07-26 18:33:08.698040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.698110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.698128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.714760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.714804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.714825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.731254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.731284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.731301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.747432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.747461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.747498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.763416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.763454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.763473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.779341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.779381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.779422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.795435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.795475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.795506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.678 [2024-07-26 18:33:08.811646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.678 [2024-07-26 18:33:08.811684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.678 [2024-07-26 18:33:08.811714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.828159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.828190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.828206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.844180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.844210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.844235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.860481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.860519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.860539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.877111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.877142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.877161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.893164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.893196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.893216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.909298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.909329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.909345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.925645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.925683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.925703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.941636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.941674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.941705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.957414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.957451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.957481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.972592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.972629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.972649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:08.987268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:08.987299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:08.987327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:09.001823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:09.001859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:09.001879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:09.018085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:09.018128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:09.018144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:09.033576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:09.033614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:09.033634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:09.048213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:09.048247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:09.048265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:09.063821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:09.063859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:09.063885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.936 [2024-07-26 18:33:09.079134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:42.936 [2024-07-26 18:33:09.079180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.936 [2024-07-26 18:33:09.079197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.195 [2024-07-26 18:33:09.093905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.195 [2024-07-26 18:33:09.093943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.195 [2024-07-26 18:33:09.093968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.195 [2024-07-26 18:33:09.108758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.195 [2024-07-26 18:33:09.108796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.195 [2024-07-26 18:33:09.108820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.195 [2024-07-26 18:33:09.122971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.195 [2024-07-26 18:33:09.123009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.195 
[2024-07-26 18:33:09.123028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.195 [2024-07-26 18:33:09.137183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.195 [2024-07-26 18:33:09.137214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.195 [2024-07-26 18:33:09.137231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.195 [2024-07-26 18:33:09.152962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.195 [2024-07-26 18:33:09.152999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.195 [2024-07-26 18:33:09.153019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.168289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.168320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.168337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.183180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.183227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.183245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.197970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.198007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.198032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.211853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.211889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.211909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.226995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.227032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.227052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.241596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.241633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.241660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.256175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.256208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.256225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.270909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.270946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.270967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.285888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.285936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.285956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.300245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.300277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.300297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.313552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.313583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.313607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.196 [2024-07-26 18:33:09.328355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.196 [2024-07-26 18:33:09.328402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.196 [2024-07-26 18:33:09.328423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.342597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.342629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.342654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.356756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.356793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.356815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.373230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.373271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.373290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.388328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.388384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.388427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.402745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.402782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.402811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.417325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.417375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.417395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.433422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.433459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.433482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.447652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.447689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.447714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.462831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.455 [2024-07-26 18:33:09.462868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.455 [2024-07-26 18:33:09.462889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.455 [2024-07-26 18:33:09.478692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.478729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.478752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.494208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.494238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.494262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.508186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.508217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.508236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.522980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.523017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.523037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.537740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 
[2024-07-26 18:33:09.537776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.537806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.552674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.552710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.552730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.568412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.568449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.583380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.583416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.583439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.456 [2024-07-26 18:33:09.598180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.456 [2024-07-26 18:33:09.598213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.456 [2024-07-26 18:33:09.598230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.714 [2024-07-26 18:33:09.612574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.714 [2024-07-26 18:33:09.612612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.714 [2024-07-26 18:33:09.612633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.628355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.628418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.628463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.642993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.643030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.643050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.657263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.657294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.657311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.671929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.671966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.671987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.686878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.686916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.686936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.701649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.701707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.716444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.716482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.716502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.731919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390) 00:32:43.715 [2024-07-26 18:33:09.731956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.715 [2024-07-26 18:33:09.731975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.715 [2024-07-26 18:33:09.745717] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390)
00:32:43.715 [2024-07-26 18:33:09.745754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.715 [2024-07-26 18:33:09.745773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:43.715 [2024-07-26 18:33:09.760768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390)
00:32:43.715 [2024-07-26 18:33:09.760811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.715 [2024-07-26 18:33:09.760831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:43.715 [2024-07-26 18:33:09.776176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390)
00:32:43.715 [2024-07-26 18:33:09.776208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.715 [2024-07-26 18:33:09.776224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:43.715 [2024-07-26 18:33:09.791607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe07390)
00:32:43.715 [2024-07-26 18:33:09.791645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.715 [2024-07-26 18:33:09.791664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.715
00:32:43.715                                            Latency(us)
00:32:43.715 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:32:43.715 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:43.715 nvme0n1                                :       2.00    1990.46     248.81       0.00      0.00     8034.03    1474.56   16893.72
00:32:43.715 ===================================================================================================================
00:32:43.715 Total                                  :               1990.46     248.81       0.00      0.00     8034.03    1474.56   16893.72
00:32:43.715 0
00:32:43.715 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:43.715 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:43.715 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:43.715 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:43.715 | .driver_specific
00:32:43.715 | .nvme_error
00:32:43.715 | .status_code
00:32:43.715 | .command_transient_transport_error'
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 128 > 0 ))
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1612318
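The get_transient_errcount call traced above is the test's pass/fail probe: it reads the bdev's NVMe error counters over bdevperf's RPC socket and pulls out how many commands completed with TRANSIENT TRANSPORT ERROR (00/22); here it returned 128, so the (( 128 > 0 )) assertion passes. A minimal standalone sketch of the same probe, reconstructed from the xtrace (the function body below is an assumption, not a copy of host/digest.sh):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        # Per-bdev iostat carries nvme_error counters because the bdev_nvme
        # module was configured with --nvme-error-stat (see the setup trace
        # for the next run below).
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Assert that at least one injected digest error was actually observed.
    (( $(get_transient_errcount nvme0n1) > 0 ))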
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1612318 ']'
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1612318
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612318
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612318'
00:32:43.974 killing process with pid 1612318
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1612318
00:32:43.974 Received shutdown signal, test time was about 2.000000 seconds
00:32:43.974
00:32:43.974                                            Latency(us)
00:32:43.974 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:32:43.974 ===================================================================================================================
00:32:43.974 Total                                  :                  0.00       0.00       0.00      0.00        0.00       0.00       0.00
00:32:43.974 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1612318
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1613230
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:44.231 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1613230 /var/tmp/bperf.sock
00:32:44.232 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1613230 ']'
00:32:44.232 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:44.232 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:44.232 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:44.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
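run_bperf_err, traced above, tears down the previous bdevperf, relaunches it idle on a fresh RPC socket, and only proceeds once that socket answers. A rough equivalent of the launch-and-wait step under the same paths; the polling loop here is an assumption standing in for the waitforlisten helper, and rpc_get_methods is used only as a cheap liveness probe:

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -z: start idle and wait for RPC-driven jobs instead of running immediately.
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the trace
        # Any successful RPC proves the app is up and listening on the socket.
        if "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done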
00:32:44.232 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:44.232 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:44.491 [2024-07-26 18:33:10.383091] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:44.491 [2024-07-26 18:33:10.383180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613230 ]
00:32:44.491 EAL: No free 2048 kB hugepages reported on node 1
00:32:44.491 [2024-07-26 18:33:10.415072] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:44.491 [2024-07-26 18:33:10.446044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:44.491 [2024-07-26 18:33:10.533907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:44.750 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:44.750 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:44.750 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:44.750 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:45.007 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:45.007 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:45.007 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:45.007 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:45.007 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:45.007 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:45.271 nvme0n1
00:32:45.271 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:45.271 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:45.271 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:45.536 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
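Collected in one place, the RPC sequence just traced is what arms the failure injection: NVMe error counting on, any stale crc32c injection cleared, the controller attached with TCP data digest enabled (--ddgst), and the accel crc32c operation told to corrupt its next 256 results. Each corrupted digest surfaces as a data digest error and completes the I/O with TRANSIENT TRANSPORT ERROR (00/22), which is exactly the pattern in the flood that follows. These are the same commands as above, only regrouped; the rpc wrapper function is an editorial convenience, not part of the test:

    #!/usr/bin/env bash
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, retry forever
    rpc accel_error_inject_error -o crc32c -t disable                   # clear any leftover injection
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # attach with data digest enabled
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt the next 256 crc32c results

    # Kick off the queued bdevperf job over the same socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests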
00:32:45.536 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:45.536 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:45.536 Running I/O for 2 seconds...
00:32:45.536 [2024-07-26 18:33:11.530451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f81e0
00:32:45.536 [2024-07-26 18:33:11.531312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.531350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:45.536 [2024-07-26 18:33:11.542381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e0630
00:32:45.536 [2024-07-26 18:33:11.543209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.543243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:32:45.536 [2024-07-26 18:33:11.556195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f0350
00:32:45.536 [2024-07-26 18:33:11.557264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.557296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:32:45.536 [2024-07-26 18:33:11.568807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f1430
00:32:45.536 [2024-07-26 18:33:11.569969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.570001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:32:45.536 [2024-07-26 18:33:11.581195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4578
00:32:45.536 [2024-07-26 18:33:11.582481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.582512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:32:45.536 [2024-07-26 18:33:11.592878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e1b48
00:32:45.536 [2024-07-26 18:33:11.594084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.594116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:45.536 [2024-07-26 18:33:11.606363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f2510
00:32:45.536 [2024-07-26 18:33:11.607834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.536 [2024-07-26 18:33:11.607864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:45.536 [2024-07-26 18:33:11.618979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e3498 00:32:45.536 [2024-07-26 18:33:11.620636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.536 [2024-07-26 18:33:11.620667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:45.536 [2024-07-26 18:33:11.630895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fda78 00:32:45.536 [2024-07-26 18:33:11.632574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.536 [2024-07-26 18:33:11.632604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:45.537 [2024-07-26 18:33:11.642571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ebfd0 00:32:45.537 [2024-07-26 18:33:11.643617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.537 [2024-07-26 18:33:11.643649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:45.537 [2024-07-26 18:33:11.656609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4578 00:32:45.537 [2024-07-26 18:33:11.658442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.537 [2024-07-26 18:33:11.658473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:45.537 [2024-07-26 18:33:11.668165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f7970 00:32:45.537 [2024-07-26 18:33:11.669407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.537 [2024-07-26 18:33:11.669438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.680809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fc560 00:32:45.796 [2024-07-26 18:33:11.682156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.682188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.695107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fef90 00:32:45.796 [2024-07-26 18:33:11.697288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9464 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.697318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.703854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e95a0 00:32:45.796 [2024-07-26 18:33:11.704683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.704720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.715314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f1868 00:32:45.796 [2024-07-26 18:33:11.716119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.716150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.728831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f20d8 00:32:45.796 [2024-07-26 18:33:11.730033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.730077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.740332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4140 00:32:45.796 [2024-07-26 18:33:11.741292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.741322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.754023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ea248 00:32:45.796 [2024-07-26 18:33:11.755368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.755399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.766598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f5be8 00:32:45.796 [2024-07-26 18:33:11.768099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.768130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.779411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f4298 00:32:45.796 [2024-07-26 18:33:11.780923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23863 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.780972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.792025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fb480 00:32:45.796 [2024-07-26 18:33:11.793531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.793566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.804615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e5220 00:32:45.796 [2024-07-26 18:33:11.806151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.806182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.816317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f3a28 00:32:45.796 [2024-07-26 18:33:11.817792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.817838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.828084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e0ea0 00:32:45.796 [2024-07-26 18:33:11.828928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.828958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.840392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eaef0 00:32:45.796 [2024-07-26 18:33:11.841338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.841369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.854535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190df550 00:32:45.796 [2024-07-26 18:33:11.856359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.856390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.866211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ec840 00:32:45.796 [2024-07-26 18:33:11.867447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:111 nsid:1 lba:22401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.867478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.878882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fc128 00:32:45.796 [2024-07-26 18:33:11.880177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.880209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.892957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f7970 00:32:45.796 [2024-07-26 18:33:11.894941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.894975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.901532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fcdd0 00:32:45.796 [2024-07-26 18:33:11.902468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.902500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.914144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4578 00:32:45.796 [2024-07-26 18:33:11.915115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.915145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.926757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e5ec8 00:32:45.796 [2024-07-26 18:33:11.927897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.927928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:45.796 [2024-07-26 18:33:11.938613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f8a50 00:32:45.796 [2024-07-26 18:33:11.939686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.796 [2024-07-26 18:33:11.939716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:11.952590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4140 00:32:46.056 [2024-07-26 18:33:11.953909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:11.953938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:11.965047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fd208 00:32:46.056 [2024-07-26 18:33:11.966376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:11.966405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:11.977167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ec408 00:32:46.056 [2024-07-26 18:33:11.978384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:11.978442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:11.989765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fc128 00:32:46.056 [2024-07-26 18:33:11.991049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:11.991085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.002186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190dece0 00:32:46.056 [2024-07-26 18:33:12.003461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.003490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.014707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e27f0 00:32:46.056 [2024-07-26 18:33:12.016023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.016075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.027350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e1b48 00:32:46.056 [2024-07-26 18:33:12.028639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.028674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.039867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f0ff8 00:32:46.056 [2024-07-26 18:33:12.041178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.041210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.052482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fd640 00:32:46.056 [2024-07-26 18:33:12.053788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.053847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.064946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f96f8 00:32:46.056 [2024-07-26 18:33:12.066222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.066269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.077523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4de8 00:32:46.056 [2024-07-26 18:33:12.078874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.078906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.090237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190df988 00:32:46.056 [2024-07-26 18:33:12.091573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.091603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.102813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e9e10 00:32:46.056 [2024-07-26 18:33:12.104150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.056 [2024-07-26 18:33:12.104189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.056 [2024-07-26 18:33:12.115471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e7c50 00:32:46.057 [2024-07-26 18:33:12.116761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.116790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.057 [2024-07-26 18:33:12.127987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fa3a0 00:32:46.057 [2024-07-26 
18:33:12.129321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.129352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.057 [2024-07-26 18:33:12.140536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e88f8 00:32:46.057 [2024-07-26 18:33:12.141853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.141888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.057 [2024-07-26 18:33:12.153263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f0788 00:32:46.057 [2024-07-26 18:33:12.154559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.154604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.057 [2024-07-26 18:33:12.165661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eaef0 00:32:46.057 [2024-07-26 18:33:12.167135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.167164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.057 [2024-07-26 18:33:12.178257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eee38 00:32:46.057 [2024-07-26 18:33:12.179767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.179796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.057 [2024-07-26 18:33:12.190653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f4298 00:32:46.057 [2024-07-26 18:33:12.192140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.057 [2024-07-26 18:33:12.192186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.202311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f9b30 00:32:46.317 [2024-07-26 18:33:12.203747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.203780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.213879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ff3c8 
00:32:46.317 [2024-07-26 18:33:12.214823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.214858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.227483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e99d8 00:32:46.317 [2024-07-26 18:33:12.228841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.228870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.238451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fa7d8 00:32:46.317 [2024-07-26 18:33:12.239543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.239577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.250768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190edd58 00:32:46.317 [2024-07-26 18:33:12.251903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.251933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.263540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f7100 00:32:46.317 [2024-07-26 18:33:12.264645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.264675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.276443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f8a50 00:32:46.317 [2024-07-26 18:33:12.277710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.277767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.288822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e8d30 00:32:46.317 [2024-07-26 18:33:12.290151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.290180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.301518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2427940) with pdu=0x2000190eb760 00:32:46.317 [2024-07-26 18:33:12.302815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.302860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.314003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e95a0 00:32:46.317 [2024-07-26 18:33:12.315222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.315262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.326564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f57b0 00:32:46.317 [2024-07-26 18:33:12.328032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.328085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.340650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4140 00:32:46.317 [2024-07-26 18:33:12.342543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.342573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.349244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190df988 00:32:46.317 [2024-07-26 18:33:12.350258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.350293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.362007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4de8 00:32:46.317 [2024-07-26 18:33:12.362976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.363006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.374585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ef270 00:32:46.317 [2024-07-26 18:33:12.375554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.375584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.386308] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190efae0 00:32:46.317 [2024-07-26 18:33:12.387283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.387312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.399997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f20d8 00:32:46.317 [2024-07-26 18:33:12.401191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.317 [2024-07-26 18:33:12.401222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.317 [2024-07-26 18:33:12.412639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f7da8 00:32:46.317 [2024-07-26 18:33:12.413742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.318 [2024-07-26 18:33:12.413776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.318 [2024-07-26 18:33:12.425140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ed0b0 00:32:46.318 [2024-07-26 18:33:12.426267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.318 [2024-07-26 18:33:12.426318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.318 [2024-07-26 18:33:12.437708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eee38 00:32:46.318 [2024-07-26 18:33:12.438821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.318 [2024-07-26 18:33:12.438850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.318 [2024-07-26 18:33:12.450215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f9b30 00:32:46.318 [2024-07-26 18:33:12.451364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.318 [2024-07-26 18:33:12.451395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.462989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f2d80 00:32:46.578 [2024-07-26 18:33:12.464174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.464206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.475581] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e5220 00:32:46.578 [2024-07-26 18:33:12.476758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.476788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.487349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fc128 00:32:46.578 [2024-07-26 18:33:12.488465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.488500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.501227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fd640 00:32:46.578 [2024-07-26 18:33:12.502534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.502578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.513840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190de038 00:32:46.578 [2024-07-26 18:33:12.515318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.515347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.526504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f3e60 00:32:46.578 [2024-07-26 18:33:12.527961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.527996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.538974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e3060 00:32:46.578 [2024-07-26 18:33:12.540449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.540483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.551617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f5be8 00:32:46.578 [2024-07-26 18:33:12.553132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.553162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.578 
[2024-07-26 18:33:12.564611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ef6a8 00:32:46.578 [2024-07-26 18:33:12.566283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.566312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.574908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fe2e8 00:32:46.578 [2024-07-26 18:33:12.575868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.575898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.587419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f7970 00:32:46.578 [2024-07-26 18:33:12.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.588410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.599973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f1ca0 00:32:46.578 [2024-07-26 18:33:12.600929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.600959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.612520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e23b8 00:32:46.578 [2024-07-26 18:33:12.613473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.613502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.625159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f5378 00:32:46.578 [2024-07-26 18:33:12.626113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.626144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.639286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f7100 00:32:46.578 [2024-07-26 18:33:12.640915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.578 [2024-07-26 18:33:12.640950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 
m:0 dnr:0 00:32:46.578 [2024-07-26 18:33:12.650752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e9168 00:32:46.579 [2024-07-26 18:33:12.651892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.579 [2024-07-26 18:33:12.651939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:46.579 [2024-07-26 18:33:12.662983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f8a50 00:32:46.579 [2024-07-26 18:33:12.664098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.579 [2024-07-26 18:33:12.664131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:46.579 [2024-07-26 18:33:12.675449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ef6a8 00:32:46.579 [2024-07-26 18:33:12.676591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.579 [2024-07-26 18:33:12.676643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:46.579 [2024-07-26 18:33:12.688443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ff3c8 00:32:46.579 [2024-07-26 18:33:12.689554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.579 [2024-07-26 18:33:12.689586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.579 [2024-07-26 18:33:12.701128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f8618 00:32:46.579 [2024-07-26 18:33:12.702397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.579 [2024-07-26 18:33:12.702427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.579 [2024-07-26 18:33:12.713729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f3e60 00:32:46.579 [2024-07-26 18:33:12.715026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.579 [2024-07-26 18:33:12.715069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.726324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190de038 00:32:46.838 [2024-07-26 18:33:12.727595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.738823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e5220 00:32:46.838 [2024-07-26 18:33:12.740095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.740151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.751125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ea248 00:32:46.838 [2024-07-26 18:33:12.752395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.752425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.763633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4140 00:32:46.838 [2024-07-26 18:33:12.764941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.764978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.776282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fd208 00:32:46.838 [2024-07-26 18:33:12.777595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.777637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.788777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fda78 00:32:46.838 [2024-07-26 18:33:12.790080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.790110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.801421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f1868 00:32:46.838 [2024-07-26 18:33:12.802718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.802749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.814562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fb048 00:32:46.838 [2024-07-26 18:33:12.815819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.815850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.827336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fb480 00:32:46.838 [2024-07-26 18:33:12.828814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.828848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.838272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ee5c8 00:32:46.838 [2024-07-26 18:33:12.839661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.839691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.849744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190df988 00:32:46.838 [2024-07-26 18:33:12.850720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.850750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.862208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190de8a8 00:32:46.838 [2024-07-26 18:33:12.863202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.838 [2024-07-26 18:33:12.863232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.838 [2024-07-26 18:33:12.874766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e0ea0 00:32:46.838 [2024-07-26 18:33:12.875710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.875745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.887211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e01f8 00:32:46.839 [2024-07-26 18:33:12.888179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.888208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.899756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ff3c8 00:32:46.839 [2024-07-26 18:33:12.900718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.900747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.912340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f2510 00:32:46.839 [2024-07-26 18:33:12.913308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.913338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.924735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e6300 00:32:46.839 [2024-07-26 18:33:12.925687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.925717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.937181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e1710 00:32:46.839 [2024-07-26 18:33:12.938154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.938183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.949638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e2c28 00:32:46.839 [2024-07-26 18:33:12.950620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.950654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.962319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ec840 00:32:46.839 [2024-07-26 18:33:12.963254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.963302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:46.839 [2024-07-26 18:33:12.974831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eaef0 00:32:46.839 [2024-07-26 18:33:12.975837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.839 [2024-07-26 18:33:12.975868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:12.987492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f8e88 00:32:47.098 [2024-07-26 18:33:12.988487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:12.988517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.000150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ef270 00:32:47.098 [2024-07-26 18:33:13.001180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.001221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.012786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4de8 00:32:47.098 [2024-07-26 18:33:13.013760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.013795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.025178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f6458 00:32:47.098 [2024-07-26 18:33:13.026078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.026109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.036013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ebb98 00:32:47.098 [2024-07-26 18:33:13.036851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.036881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.048983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e9168 00:32:47.098 [2024-07-26 18:33:13.050013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.050044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.060982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e73e0 00:32:47.098 [2024-07-26 18:33:13.062266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.062297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.072307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eb760 00:32:47.098 [2024-07-26 18:33:13.073479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 
18:33:13.073509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.085497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ea680 00:32:47.098 [2024-07-26 18:33:13.086866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.086896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.097688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f6020 00:32:47.098 [2024-07-26 18:33:13.099164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.099195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.107318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fef90 00:32:47.098 [2024-07-26 18:33:13.108210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.108245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.119271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e12d8 00:32:47.098 [2024-07-26 18:33:13.120194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.120226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.132626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f8a50 00:32:47.098 [2024-07-26 18:33:13.134176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.134207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.143463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f0788 00:32:47.098 [2024-07-26 18:33:13.144683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.144715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.155354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f57b0 00:32:47.098 [2024-07-26 18:33:13.156631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:47.098 [2024-07-26 18:33:13.156662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.168738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f1430 00:32:47.098 [2024-07-26 18:33:13.170577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.170607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.176858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f35f0 00:32:47.098 [2024-07-26 18:33:13.177721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.177752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.187924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4578 00:32:47.098 [2024-07-26 18:33:13.188769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.188800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.200969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eaab8 00:32:47.098 [2024-07-26 18:33:13.202014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.202045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.213159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e1b48 00:32:47.098 [2024-07-26 18:33:13.214345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.214377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.224102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4140 00:32:47.098 [2024-07-26 18:33:13.225267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.225297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.098 [2024-07-26 18:33:13.237209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fda78 00:32:47.098 [2024-07-26 18:33:13.238598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14624 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:47.098 [2024-07-26 18:33:13.238629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.357 [2024-07-26 18:33:13.249542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f3a28 00:32:47.357 [2024-07-26 18:33:13.251034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.357 [2024-07-26 18:33:13.251073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.357 [2024-07-26 18:33:13.260566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ef6a8 00:32:47.357 [2024-07-26 18:33:13.261983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.357 [2024-07-26 18:33:13.262014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.357 [2024-07-26 18:33:13.271378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e1f80 00:32:47.357 [2024-07-26 18:33:13.272420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.357 [2024-07-26 18:33:13.272450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.283184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190de470 00:32:47.358 [2024-07-26 18:33:13.284262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.284295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.296473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ecc78 00:32:47.358 [2024-07-26 18:33:13.298237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.298268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.307220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190df988 00:32:47.358 [2024-07-26 18:33:13.308604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.308635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.317648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f20d8 00:32:47.358 [2024-07-26 18:33:13.319194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20207 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.319226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.328722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ebfd0 00:32:47.358 [2024-07-26 18:33:13.329651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.329681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.340754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fdeb0 00:32:47.358 [2024-07-26 18:33:13.341758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.341789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.351812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e73e0 00:32:47.358 [2024-07-26 18:33:13.352860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.352891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.364944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ec408 00:32:47.358 [2024-07-26 18:33:13.366153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.366201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.376848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f6cc8 00:32:47.358 [2024-07-26 18:33:13.378203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.378232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.387920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fa7d8 00:32:47.358 [2024-07-26 18:33:13.389234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.389263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.398709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190eb760 00:32:47.358 [2024-07-26 18:33:13.399694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:109 nsid:1 lba:2415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.399724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.410388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190df988 00:32:47.358 [2024-07-26 18:33:13.411277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.411314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.422146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f5378 00:32:47.358 [2024-07-26 18:33:13.422989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.423018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.433984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e5220 00:32:47.358 [2024-07-26 18:33:13.434855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.434914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.445854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e4140 00:32:47.358 [2024-07-26 18:33:13.446740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.446771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.457785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190f5be8 00:32:47.358 [2024-07-26 18:33:13.458698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.458729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.469890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e5a90 00:32:47.358 [2024-07-26 18:33:13.470764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.358 [2024-07-26 18:33:13.470815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.358 [2024-07-26 18:33:13.481883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190e1710 00:32:47.358 [2024-07-26 18:33:13.482933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.358 [2024-07-26 18:33:13.482964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:32:47.358 [2024-07-26 18:33:13.493749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ec408
00:32:47.358 [2024-07-26 18:33:13.494807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.358 [2024-07-26 18:33:13.494866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:32:47.615 [2024-07-26 18:33:13.506024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190fef90
00:32:47.615 [2024-07-26 18:33:13.507029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.615 [2024-07-26 18:33:13.507097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:32:47.615 [2024-07-26 18:33:13.517856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2427940) with pdu=0x2000190ef270
00:32:47.615 [2024-07-26 18:33:13.518887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.615 [2024-07-26 18:33:13.518932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:32:47.615
00:32:47.615 Latency(us)
00:32:47.615 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min      max
00:32:47.615 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:47.615 nvme0n1            : 2.00        20814.53  81.31  0.00    0.00  6139.61  2682.12  14854.83
00:32:47.615 ===================================================================================================================
00:32:47.615 Total              :             20814.53  81.31  0.00    0.00  6139.61  2682.12  14854.83
00:32:47.615 0
00:32:47.615 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:47.615 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:47.615 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:47.615 | .driver_specific
00:32:47.615 | .nvme_error
00:32:47.615 | .status_code
00:32:47.615 | .command_transient_transport_error'
00:32:47.615 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1613230
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1613230 ']'
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1613230
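The (( 163 > 0 )) assertion above is the point of the whole run: with NVMe error statistics enabled on the bperf side, bdev_get_iostat reports per-status-code completion counts for the bdev, and the jq filter extracts the command_transient_transport_error counter (163 here, matching the (00/22) completions printed throughout this log), which must be non-zero for the test to pass. A minimal standalone sketch of the same check, assuming a bdevperf instance is serving RPCs on /var/tmp/bperf.sock and exposes a bdev named nvme0n1:

  # pull the transient transport error count out of the bdev's NVMe error stats
  count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 )) && echo "saw $count transient transport errors"

The killprocess call in progress here then shuts this bdevperf instance down before the next error case is set up.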
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613230
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613230'
killing process with pid 1613230
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1613230
00:32:47.873 Received shutdown signal, test time was about 2.000000 seconds
00:32:47.873
00:32:47.873 Latency(us)
00:32:47.873 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:32:47.873 ===================================================================================================================
00:32:47.873 Total              :             0.00  0.00   0.00    0.00  0.00     0.00  0.00
00:32:47.873 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1613230
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1613635
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1613635 /var/tmp/bperf.sock
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1613635 ']'
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
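run_bperf_err then repeats the experiment with 128 KiB (131072-byte) random writes at queue depth 16. Note the -z flag on the bdevperf command line: as I read it, bdevperf starts idle and only runs the configured job once it receives a perform_tests RPC, which gives the script time to attach the controller and arm the error injection first. The two-step pattern, sketched with the paths and socket taken from the trace above:

  # step 1: start bdevperf idle on core 1 (core mask -m 2); the job shape is fixed up front
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # step 2: after setup is done, kick the job off over the same RPC socket
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests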
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:48.130 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:48.130 [2024-07-26 18:33:14.088924] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:48.130 [2024-07-26 18:33:14.089009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613635 ]
00:32:48.130 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:48.130 Zero copy mechanism will not be used.
00:32:48.130 EAL: No free 2048 kB hugepages reported on node 1
00:32:48.130 [2024-07-26 18:33:14.120950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:48.130 [2024-07-26 18:33:14.148625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:48.130 [2024-07-26 18:33:14.233302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:48.388 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:48.388 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:48.388 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:48.388 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
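Two bdev_nvme options shape the run: --nvme-error-stat keeps the per-status-code completion counters that get_transient_errcount reads back, and --bdev-retry-count -1 (which, as I understand the option, means unlimited bdev-layer retries) lets the injected digest failures be retried and counted instead of failing the job outright. The trace that follows attaches the controller with --ddgst, enabling the TCP data digest, and then arms the fault with accel_error_inject_error -o crc32c -t corrupt, apparently making the accel layer return corrupted crc32c results so the digest check in tcp.c fails and each affected WRITE completes with the transient transport error status. Restated as plain commands (flags exactly as in the trace; the split between the bperf socket and the target's default RPC socket follows the script's bperf_rpc/rpc_cmd usage):

  # enable data digest on the TCP connection so every data PDU carries a crc32c
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt crc32c results so digest verification fails (no -s: default target socket)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32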
host/digest.sh@69 -- # bperf_py perform_tests 00:32:48.904 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:49.163 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:49.163 Zero copy mechanism will not be used. 00:32:49.163 Running I/O for 2 seconds... 00:32:49.163 [2024-07-26 18:33:15.126701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.127667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.127712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.140808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.142051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.142109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.155879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.158127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.158157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.170633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.172987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.173022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.187342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.189907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.189943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.202983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.204919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.204955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.218241] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.219442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.163 [2024-07-26 18:33:15.220805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.163 [2024-07-26 18:33:15.232913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.163 [2024-07-26 18:33:15.235777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.164 [2024-07-26 18:33:15.235812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.164 [2024-07-26 18:33:15.250126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.164 [2024-07-26 18:33:15.251881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.164 [2024-07-26 18:33:15.253410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.164 [2024-07-26 18:33:15.266984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.164 [2024-07-26 18:33:15.268372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.164 [2024-07-26 18:33:15.268406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.164 [2024-07-26 18:33:15.282614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.164 [2024-07-26 18:33:15.285043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.164 [2024-07-26 18:33:15.285098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.164 [2024-07-26 18:33:15.297600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.164 [2024-07-26 18:33:15.300494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.164 [2024-07-26 18:33:15.300526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.312948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.314941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.314976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.424 
[2024-07-26 18:33:15.327940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.330094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.330131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.341561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.343948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.343980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.357138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.358879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.358911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.372928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.374275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.374307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.387767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.390301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.390347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.402670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.403689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.403720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:49.424 [2024-07-26 18:33:15.417158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90 00:32:49.424 [2024-07-26 18:33:15.420525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.424 [2024-07-26 18:33:15.420556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:32:49.424 [2024-07-26 18:33:15.433137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90
00:32:49.424 [2024-07-26 18:33:15.434605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.424 [2024-07-26 18:33:15.434651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~130 further record triplets of the same form (data digest error / WRITE command / COMMAND TRANSIENT TRANSPORT ERROR completion), repeating at roughly 15 ms intervals from 18:33:15.449 through 18:33:17.096, elided; the 132 transient transport errors they produce are counted by the test below ...]
00:32:51.015 [2024-07-26 18:33:17.109508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24295c0) with pdu=0x2000190fef90
00:32:51.015 [2024-07-26 18:33:17.110828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.015 [2024-07-26 18:33:17.112139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:51.015
00:32:51.015 Latency(us)
00:32:51.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:51.015 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:51.015 nvme0n1 : 2.01 2042.30 255.29 0.00 0.00 7772.01 4854.52 19418.07
00:32:51.015 ===================================================================================================================
00:32:51.015 Total : 2042.30 255.29 0.00 0.00 7772.01 4854.52 19418.07
00:32:51.015 0
00:32:51.015 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:51.015 | .driver_specific
00:32:51.015 | .nvme_error
00:32:51.015 | .status_code
00:32:51.015 | .command_transient_transport_error'
00:32:51.015 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:51.274 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:32:51.274 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1613635
00:32:51.274 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1613635 ']'
00:32:51.274 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1613635
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613635
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613635'
killing process with pid 1613635
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1613635
00:32:51.275 Received shutdown signal, test time was about 2.000000 seconds
00:32:51.275
00:32:51.275 Latency(us)
00:32:51.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:51.275 ===================================================================================================================
00:32:51.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:51.275 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1613635
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1611765
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1611765 ']'
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1611765
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611765
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611765'
killing process with pid 1611765
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1611765
00:32:51.533 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1611765
00:32:51.792
00:32:51.792 real 0m15.484s
00:32:51.792 user 0m27.236s
00:32:51.792 sys 0m4.153s
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:51.792 ************************************
00:32:51.792 END TEST nvmf_digest_error
00:32:51.792 ************************************
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:51.792 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:51.792 rmmod nvme_tcp
00:32:52.050 rmmod nvme_fabrics
00:32:52.050 rmmod nvme_keyring
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1611765 ']'
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1611765
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1611765 ']'
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1611765
00:32:52.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1611765) - No such process
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1611765 is not found'
Process with pid 1611765 is not found
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:52.050 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:53.955
00:32:53.955 real 0m35.144s
00:32:53.955 user 0m58.613s
00:32:53.955 sys 0m9.753s
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:32:53.955 ************************************
00:32:53.955 END TEST nvmf_digest
00:32:53.955 ************************************
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:53.955 ************************************
00:32:53.955 START TEST nvmf_bdevperf
00:32:53.955 ************************************
00:32:53.955 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:54.214 * Looking for test storage...
00:32:54.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:54.214 18:33:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:56.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:56.118 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:56.118 18:33:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:56.118 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.118 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:56.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:56.119 18:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:56.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:56.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms
00:32:56.119
00:32:56.119 --- 10.0.0.2 ping statistics ---
00:32:56.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:56.119 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:56.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:56.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:32:56.119
00:32:56.119 --- 10.0.0.1 ping statistics ---
00:32:56.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:56.119 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
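The namespace plumbing above, condensed: one port (cvl_0_0) is moved into a private network namespace to play the target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and two-way reachability is proven before anything NVMe-related starts. A minimal sketch of the same topology, with the assumption of a veth pair standing in for the two physical E810 ports so it can be reproduced on any Linux box:

    # veth pair stands in for the physical cvl_0_0/cvl_0_1 ports (assumption)
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link add cvl_0_1 type veth peer name cvl_0_0
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the netns
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                      # initiator -> target
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator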
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1615981
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1615981
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1615981 ']'
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:56.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:56.119 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.119 [2024-07-26 18:33:22.203299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
[2024-07-26 18:33:22.203396] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:56.119 EAL: No free 2048 kB hugepages reported on node 1
00:32:56.378 [2024-07-26 18:33:22.242916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:56.378 [2024-07-26 18:33:22.272153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:56.378 [2024-07-26 18:33:22.360126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:56.378 [2024-07-26 18:33:22.360183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:56.378 [2024-07-26 18:33:22.360205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:56.378 [2024-07-26 18:33:22.360224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:56.378 [2024-07-26 18:33:22.360241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:56.378 [2024-07-26 18:33:22.360300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:32:56.378 [2024-07-26 18:33:22.360450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:32:56.378 [2024-07-26 18:33:22.360458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
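waitforlisten itself amounts to polling the app's RPC socket until it answers, bailing out if the pid dies first. Roughly, as a sketch (the retry cadence and the rpc_get_methods probe are assumptions about the helper, not a copy of it):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods only succeeds once nvmf_tgt is listening on the socket
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done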
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.378 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.379 [2024-07-26 18:33:22.504878] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.638 Malloc0
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:56.638 [2024-07-26 18:33:22.571410] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
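Those rpc_cmd calls map one-to-one onto scripts/rpc.py invocations, so the same target can be stood up by hand against the socket this run uses:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192      # same transport flags the harness passes
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420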
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:56.638 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:56.638 {
00:32:56.638   "params": {
00:32:56.638     "name": "Nvme$subsystem",
00:32:56.638     "trtype": "$TEST_TRANSPORT",
00:32:56.638     "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:56.638     "adrfam": "ipv4",
00:32:56.638     "trsvcid": "$NVMF_PORT",
00:32:56.638     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:56.639     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:56.639     "hdgst": ${hdgst:-false},
00:32:56.639     "ddgst": ${ddgst:-false}
00:32:56.639   },
00:32:56.639   "method": "bdev_nvme_attach_controller"
00:32:56.639 }
00:32:56.639 EOF
00:32:56.639 )")
00:32:56.639 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:56.639 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:56.639 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:56.639 18:33:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:56.639   "params": {
00:32:56.639     "name": "Nvme1",
00:32:56.639     "trtype": "tcp",
00:32:56.639     "traddr": "10.0.0.2",
00:32:56.639     "adrfam": "ipv4",
00:32:56.639     "trsvcid": "4420",
00:32:56.639     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:56.639     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:56.639     "hdgst": false,
00:32:56.639     "ddgst": false
00:32:56.639   },
00:32:56.639   "method": "bdev_nvme_attach_controller"
00:32:56.639 }'
00:32:56.639 [2024-07-26 18:33:22.619436] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
[2024-07-26 18:33:22.619511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616126 ]
00:32:56.639 EAL: No free 2048 kB hugepages reported on node 1
00:32:56.639 [2024-07-26 18:33:22.652835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:56.639 [2024-07-26 18:33:22.682677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:57.207 [2024-07-26 18:33:22.774238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:57.207 Running I/O for 1 seconds...
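gen_nvmf_target_json emits a bdev-subsystem config containing exactly that bdev_nvme_attach_controller object, which bdevperf reads over /dev/fd/62. An equivalent hand-built run, assuming the usual "subsystems" wrapper shape around the object printed above:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 128-deep, 4 KiB verify workload against the attached controller for 1 second
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1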
00:32:58.145
00:32:58.145                                                                                                 Latency(us)
00:32:58.145 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:58.145 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:58.145 	 Verification LBA range: start 0x0 length 0x4000
00:32:58.145 	 Nvme1n1             :       1.01    8862.40      34.62       0.00       0.00   14380.04    2791.35   14660.65
00:32:58.145 ===================================================================================================================
00:32:58.145 Total               :                 8862.40      34.62       0.00       0.00   14380.04    2791.35   14660.65
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1616274
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:58.404 {
00:32:58.404   "params": {
00:32:58.404     "name": "Nvme$subsystem",
00:32:58.404     "trtype": "$TEST_TRANSPORT",
00:32:58.404     "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:58.404     "adrfam": "ipv4",
00:32:58.404     "trsvcid": "$NVMF_PORT",
00:32:58.404     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:58.404     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:58.404     "hdgst": ${hdgst:-false},
00:32:58.404     "ddgst": ${ddgst:-false}
00:32:58.404   },
00:32:58.404   "method": "bdev_nvme_attach_controller"
00:32:58.404 }
00:32:58.404 EOF
00:32:58.404 )")
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:58.404 18:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:58.404   "params": {
00:32:58.404     "name": "Nvme1",
00:32:58.404     "trtype": "tcp",
00:32:58.404     "traddr": "10.0.0.2",
00:32:58.404     "adrfam": "ipv4",
00:32:58.404     "trsvcid": "4420",
00:32:58.404     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:58.404     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:58.404     "hdgst": false,
00:32:58.404     "ddgst": false
00:32:58.404   },
00:32:58.404   "method": "bdev_nvme_attach_controller"
00:32:58.404 }'
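What follows is the failure-injection half of the test, condensed: a second 15-second verify job is launched (the harness adds -f for this pass), given a few seconds of clean I/O, and then the target is killed out from under it, which produces the abort storm below. In sketch form, reusing the config file from the sketch above and the pids of this run:

    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!       # 1616274 in this run
    sleep 3
    kill -9 "$nvmfpid"   # nvmf_tgt (1615981) dies mid-run; every in-flight command
    sleep 3              # on the queue pair then completes as ABORTED - SQ DELETION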
00:32:58.404 [2024-07-26 18:33:24.396756] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
[2024-07-26 18:33:24.396831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616274 ]
00:32:58.405 EAL: No free 2048 kB hugepages reported on node 1
00:32:58.405 [2024-07-26 18:33:24.429256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:58.405 [2024-07-26 18:33:24.457240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.405 [2024-07-26 18:33:24.541018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:58.662 Running I/O for 15 seconds...
00:33:01.954 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1615981
00:33:01.954 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:33:01.954 [2024-07-26 18:33:27.367064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:01.954 [2024-07-26 18:33:27.367113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:01.954 [... hundreds of further nvme_qpair.c print_command/print_completion pairs elided: every in-flight WRITE (lba 53576 upward) and READ (lba 52872 upward) on qid:1 completes with ABORTED - SQ DELETION (00/08) once the target is gone ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.956 [2024-07-26 18:33:27.370178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.956 [2024-07-26 18:33:27.370192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.956 [2024-07-26 18:33:27.370207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.956 [2024-07-26 18:33:27.370220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.956 [2024-07-26 18:33:27.370235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:01.957 [2024-07-26 18:33:27.370628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 
[2024-07-26 18:33:27.370839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.370967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.370983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.957 [2024-07-26 18:33:27.371400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba8e60 is same with the state(5) to be set 00:33:01.957 [2024-07-26 18:33:27.371452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:01.957 [2024-07-26 18:33:27.371466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:01.957 [2024-07-26 18:33:27.371479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53560 len:8 PRP1 0x0 PRP2 0x0 00:33:01.957 [2024-07-26 18:33:27.371494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.957 [2024-07-26 18:33:27.371557] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xba8e60 was disconnected and freed. reset controller. 
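What this burst records: the TCP qpair 0xba8e60 is being torn down, so SPDK drains every command still queued on qid:1 and completes each one in software with the generic NVMe status "Command Aborted due to SQ Deletion"; the "(00/08)" in every completion line is that status pair (status code type 00h, status code 08h). Below is a minimal sketch of decoding the pair, assuming only the NVMe-spec numbering (nvme_status_str is a hypothetical helper, not an SPDK API):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: render the "(SCT/SC)" pair that the log prints,
 * e.g. (00/08), as text. Values follow the NVMe base specification:
 * SCT 0x0 is the generic command status set, and within it SC 0x08 is
 * "Command Aborted due to SQ Deletion". */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x00)
        return "SUCCESS";
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    /* Every completion in the dump above carries (00/08). */
    printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
    return 0;
}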
00:33:01.957 [2024-07-26 18:33:27.375396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.957 [2024-07-26 18:33:27.375491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.957 [2024-07-26 18:33:27.376209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.957 [2024-07-26 18:33:27.376239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.957 [2024-07-26 18:33:27.376256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.958 [2024-07-26 18:33:27.376504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.958 [2024-07-26 18:33:27.376748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.958 [2024-07-26 18:33:27.376772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.958 [2024-07-26 18:33:27.376789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.958 [2024-07-26 18:33:27.380352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.958 [2024-07-26 18:33:27.389677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.958 [2024-07-26 18:33:27.390135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.958 [2024-07-26 18:33:27.390167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.958 [2024-07-26 18:33:27.390185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.958 [2024-07-26 18:33:27.390425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.958 [2024-07-26 18:33:27.390668] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.958 [2024-07-26 18:33:27.390692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.958 [2024-07-26 18:33:27.390708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.958 [2024-07-26 18:33:27.394299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
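The reset path then stalls at the socket layer: errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting connections at 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) while the target side is down, so every reconnect dies in connect() before the admin queue can be re-established and the controller is marked failed. A self-contained sketch of just that failing step, using plain POSIX sockets and the address from the log:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Reproduce the failing step from the log: a plain TCP connect to
     * the NVMe/TCP target at 10.0.0.2:4420. While no listener is up,
     * connect() fails with errno 111 (ECONNREFUSED). */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}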
00:33:01.958-00:33:01.961 [2024-07-26 18:33:27.403604 through 18:33:27.811933] (the identical reset sequence repeats 30 more times at roughly 14 ms intervals: nvme_ctrlr_disconnect "resetting controller" -> posix_sock_create "connect() failed, errno = 111" -> nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420" -> spdk_nvme_ctrlr_reconnect_poll_async "controller reinitialization failed" -> _bdev_nvme_reset_ctrlr_complete "Resetting controller failed.")
00:33:01.961 [2024-07-26 18:33:27.821237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.821656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.821687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.821704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.821943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.961 [2024-07-26 18:33:27.822200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.961 [2024-07-26 18:33:27.822225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.961 [2024-07-26 18:33:27.822240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.961 [2024-07-26 18:33:27.825823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.961 [2024-07-26 18:33:27.835136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.835580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.835611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.835629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.835869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.961 [2024-07-26 18:33:27.836125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.961 [2024-07-26 18:33:27.836149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.961 [2024-07-26 18:33:27.836164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.961 [2024-07-26 18:33:27.839742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.961 [2024-07-26 18:33:27.849040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.849503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.849534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.849552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.849791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.961 [2024-07-26 18:33:27.850034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.961 [2024-07-26 18:33:27.850067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.961 [2024-07-26 18:33:27.850085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.961 [2024-07-26 18:33:27.853665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.961 [2024-07-26 18:33:27.862977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.863428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.863459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.863477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.863717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.961 [2024-07-26 18:33:27.863960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.961 [2024-07-26 18:33:27.863984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.961 [2024-07-26 18:33:27.863999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.961 [2024-07-26 18:33:27.867591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.961 [2024-07-26 18:33:27.876893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.877317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.877348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.877365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.877610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.961 [2024-07-26 18:33:27.877855] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.961 [2024-07-26 18:33:27.877879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.961 [2024-07-26 18:33:27.877894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.961 [2024-07-26 18:33:27.881483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.961 [2024-07-26 18:33:27.890792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.891259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.891289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.891307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.891546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.961 [2024-07-26 18:33:27.891791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.961 [2024-07-26 18:33:27.891815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.961 [2024-07-26 18:33:27.891830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.961 [2024-07-26 18:33:27.895424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.961 [2024-07-26 18:33:27.904727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.961 [2024-07-26 18:33:27.905170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.961 [2024-07-26 18:33:27.905201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.961 [2024-07-26 18:33:27.905219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.961 [2024-07-26 18:33:27.905458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.962 [2024-07-26 18:33:27.905702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.962 [2024-07-26 18:33:27.905725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.962 [2024-07-26 18:33:27.905741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.962 [2024-07-26 18:33:27.909337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.962 [2024-07-26 18:33:27.918644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.962 [2024-07-26 18:33:27.919072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.962 [2024-07-26 18:33:27.919113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.962 [2024-07-26 18:33:27.919131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.962 [2024-07-26 18:33:27.919370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.962 [2024-07-26 18:33:27.919615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.962 [2024-07-26 18:33:27.919638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.962 [2024-07-26 18:33:27.919659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.962 [2024-07-26 18:33:27.923263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.962 [2024-07-26 18:33:27.932591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.962 [2024-07-26 18:33:27.933009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.962 [2024-07-26 18:33:27.933048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.962 [2024-07-26 18:33:27.933076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.962 [2024-07-26 18:33:27.933317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.962 [2024-07-26 18:33:27.933561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.962 [2024-07-26 18:33:27.933585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.962 [2024-07-26 18:33:27.933600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.962 [2024-07-26 18:33:27.937194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.962 [2024-07-26 18:33:27.946509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.962 [2024-07-26 18:33:27.946946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.962 [2024-07-26 18:33:27.946976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:01.962 [2024-07-26 18:33:27.946994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:01.962 [2024-07-26 18:33:27.947267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:01.962 [2024-07-26 18:33:27.947512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.962 [2024-07-26 18:33:27.947538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.962 [2024-07-26 18:33:27.947554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.962 [2024-07-26 18:33:27.951147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.962 [2024-07-26 18:33:27.960500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:27.960927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:27.960958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:27.960975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:27.961226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:27.961470] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:27.961494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.962 [2024-07-26 18:33:27.961509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.962 [2024-07-26 18:33:27.965106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.962 [2024-07-26 18:33:27.974437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:27.974880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:27.974916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:27.974935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:27.975184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:27.975429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:27.975454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.962 [2024-07-26 18:33:27.975468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.962 [2024-07-26 18:33:27.979075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.962 [2024-07-26 18:33:27.988408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:27.988966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:27.989024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:27.989041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:27.989291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:27.989545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:27.989568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.962 [2024-07-26 18:33:27.989583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.962 [2024-07-26 18:33:27.993184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.962 [2024-07-26 18:33:28.002296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:28.002763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:28.002793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:28.002811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:28.003051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:28.003307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:28.003331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.962 [2024-07-26 18:33:28.003346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.962 [2024-07-26 18:33:28.006939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.962 [2024-07-26 18:33:28.016285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:28.016728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:28.016759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:28.016777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:28.017016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:28.017276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:28.017301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.962 [2024-07-26 18:33:28.017317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.962 [2024-07-26 18:33:28.020898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.962 [2024-07-26 18:33:28.030241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:28.030671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:28.030701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:28.030719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:28.030959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:28.031217] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:28.031242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.962 [2024-07-26 18:33:28.031257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.962 [2024-07-26 18:33:28.034846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.962 [2024-07-26 18:33:28.044198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.962 [2024-07-26 18:33:28.044619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.962 [2024-07-26 18:33:28.044650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.962 [2024-07-26 18:33:28.044668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.962 [2024-07-26 18:33:28.044908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.962 [2024-07-26 18:33:28.045165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.962 [2024-07-26 18:33:28.045190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.963 [2024-07-26 18:33:28.045205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.963 [2024-07-26 18:33:28.048792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.963 [2024-07-26 18:33:28.058122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.963 [2024-07-26 18:33:28.058575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.963 [2024-07-26 18:33:28.058606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.963 [2024-07-26 18:33:28.058623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.963 [2024-07-26 18:33:28.058863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.963 [2024-07-26 18:33:28.059120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.963 [2024-07-26 18:33:28.059144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.963 [2024-07-26 18:33:28.059159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.963 [2024-07-26 18:33:28.062763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.963 [2024-07-26 18:33:28.072109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.963 [2024-07-26 18:33:28.072666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.963 [2024-07-26 18:33:28.072719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.963 [2024-07-26 18:33:28.072736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.963 [2024-07-26 18:33:28.072976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.963 [2024-07-26 18:33:28.073231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.963 [2024-07-26 18:33:28.073257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.963 [2024-07-26 18:33:28.073272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.963 [2024-07-26 18:33:28.076875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:01.963 [2024-07-26 18:33:28.085990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:01.963 [2024-07-26 18:33:28.086420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.963 [2024-07-26 18:33:28.086451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:01.963 [2024-07-26 18:33:28.086468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:01.963 [2024-07-26 18:33:28.086708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:01.963 [2024-07-26 18:33:28.086952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:01.963 [2024-07-26 18:33:28.086976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:01.963 [2024-07-26 18:33:28.086991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:01.963 [2024-07-26 18:33:28.090592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.222 [2024-07-26 18:33:28.099931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.222 [2024-07-26 18:33:28.100389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.222 [2024-07-26 18:33:28.100421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.222 [2024-07-26 18:33:28.100439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.222 [2024-07-26 18:33:28.100678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.222 [2024-07-26 18:33:28.100922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.222 [2024-07-26 18:33:28.100946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.222 [2024-07-26 18:33:28.100961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.222 [2024-07-26 18:33:28.104556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.222 [2024-07-26 18:33:28.113895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.222 [2024-07-26 18:33:28.114377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.222 [2024-07-26 18:33:28.114435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.222 [2024-07-26 18:33:28.114459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.222 [2024-07-26 18:33:28.114699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.222 [2024-07-26 18:33:28.114943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.222 [2024-07-26 18:33:28.114967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.222 [2024-07-26 18:33:28.114982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.222 [2024-07-26 18:33:28.118574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.222 [2024-07-26 18:33:28.127898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.222 [2024-07-26 18:33:28.128326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.222 [2024-07-26 18:33:28.128357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.222 [2024-07-26 18:33:28.128375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.222 [2024-07-26 18:33:28.128614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.223 [2024-07-26 18:33:28.128858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.223 [2024-07-26 18:33:28.128882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.223 [2024-07-26 18:33:28.128897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.223 [2024-07-26 18:33:28.132486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.223 [2024-07-26 18:33:28.141830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.223 [2024-07-26 18:33:28.142292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.223 [2024-07-26 18:33:28.142324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.223 [2024-07-26 18:33:28.142341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.223 [2024-07-26 18:33:28.142581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.223 [2024-07-26 18:33:28.142825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.223 [2024-07-26 18:33:28.142849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.223 [2024-07-26 18:33:28.142864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.223 [2024-07-26 18:33:28.146454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.223 [2024-07-26 18:33:28.155751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.223 [2024-07-26 18:33:28.156202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.223 [2024-07-26 18:33:28.156230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.223 [2024-07-26 18:33:28.156246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.223 [2024-07-26 18:33:28.156505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.223 [2024-07-26 18:33:28.156749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.223 [2024-07-26 18:33:28.156778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.223 [2024-07-26 18:33:28.156794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.223 [2024-07-26 18:33:28.160403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.223 [2024-07-26 18:33:28.169706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.223 [2024-07-26 18:33:28.170160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.223 [2024-07-26 18:33:28.170191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.223 [2024-07-26 18:33:28.170208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.223 [2024-07-26 18:33:28.170448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.223 [2024-07-26 18:33:28.170693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.223 [2024-07-26 18:33:28.170716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.223 [2024-07-26 18:33:28.170732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.223 [2024-07-26 18:33:28.174323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.223 [2024-07-26 18:33:28.183623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.223 [2024-07-26 18:33:28.184069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.223 [2024-07-26 18:33:28.184101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.223 [2024-07-26 18:33:28.184118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.223 [2024-07-26 18:33:28.184357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.223 [2024-07-26 18:33:28.184601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.223 [2024-07-26 18:33:28.184625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.223 [2024-07-26 18:33:28.184640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.223 [2024-07-26 18:33:28.188230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.223 [2024-07-26 18:33:28.197527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.223 [2024-07-26 18:33:28.197944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.223 [2024-07-26 18:33:28.197974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.223 [2024-07-26 18:33:28.197991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.223 [2024-07-26 18:33:28.198242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.223 [2024-07-26 18:33:28.198486] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.223 [2024-07-26 18:33:28.198510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.223 [2024-07-26 18:33:28.198525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.223 [2024-07-26 18:33:28.202112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.223 [2024-07-26 18:33:28.211425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.223 [2024-07-26 18:33:28.212008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.223 [2024-07-26 18:33:28.212066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.223 [2024-07-26 18:33:28.212086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.223 [2024-07-26 18:33:28.212326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.223 [2024-07-26 18:33:28.212570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.223 [2024-07-26 18:33:28.212594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.223 [2024-07-26 18:33:28.212609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.223 [2024-07-26 18:33:28.216202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.223 [2024-07-26 18:33:28.225290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.223 [2024-07-26 18:33:28.225881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.223 [2024-07-26 18:33:28.225935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.223 [2024-07-26 18:33:28.225953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.223 [2024-07-26 18:33:28.226204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.223 [2024-07-26 18:33:28.226449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.223 [2024-07-26 18:33:28.226472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.223 [2024-07-26 18:33:28.226487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.223 [2024-07-26 18:33:28.230078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.223 [2024-07-26 18:33:28.239179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.223 [2024-07-26 18:33:28.239609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.223 [2024-07-26 18:33:28.239636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.223 [2024-07-26 18:33:28.239668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.223 [2024-07-26 18:33:28.239915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.223 [2024-07-26 18:33:28.240180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.223 [2024-07-26 18:33:28.240206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.223 [2024-07-26 18:33:28.240222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.223 [2024-07-26 18:33:28.243812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.223 [2024-07-26 18:33:28.253136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.223 [2024-07-26 18:33:28.253563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.223 [2024-07-26 18:33:28.253594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.223 [2024-07-26 18:33:28.253611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.223 [2024-07-26 18:33:28.253856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.223 [2024-07-26 18:33:28.254113] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.254147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.254162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.257745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.224 [2024-07-26 18:33:28.267179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.267725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.267775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.267793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.268033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.268287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.268312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.268329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.271920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.224 [2024-07-26 18:33:28.281244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.281686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.281717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.281736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.281975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.282231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.282255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.282271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.285852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.224 [2024-07-26 18:33:28.295174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.295618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.295649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.295668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.295907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.296162] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.296186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.296207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.299785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.224 [2024-07-26 18:33:28.309099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.309561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.309591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.309609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.309848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.310103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.310127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.310143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.313724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.224 [2024-07-26 18:33:28.323057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.323528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.323553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.323582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.323828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.324084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.324119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.324131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.327681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.224 [2024-07-26 18:33:28.337009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.337439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.337470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.337488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.337727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.337970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.337994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.338009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.341605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.224 [2024-07-26 18:33:28.350918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.351378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.351414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.351433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.351672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.351916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.224 [2024-07-26 18:33:28.351940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.224 [2024-07-26 18:33:28.351956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.224 [2024-07-26 18:33:28.355546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.224 [2024-07-26 18:33:28.364871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.224 [2024-07-26 18:33:28.365309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.224 [2024-07-26 18:33:28.365352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:02.224 [2024-07-26 18:33:28.365368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:02.224 [2024-07-26 18:33:28.365612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:02.224 [2024-07-26 18:33:28.365857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.484 [2024-07-26 18:33:28.365881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.484 [2024-07-26 18:33:28.365899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.484 [2024-07-26 18:33:28.369496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.484 [2024-07-26 18:33:28.378818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.484 [2024-07-26 18:33:28.379294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.484 [2024-07-26 18:33:28.379325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.484 [2024-07-26 18:33:28.379343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.484 [2024-07-26 18:33:28.379582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.484 [2024-07-26 18:33:28.379825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.484 [2024-07-26 18:33:28.379850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.484 [2024-07-26 18:33:28.379866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.484 [2024-07-26 18:33:28.383465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.484 [2024-07-26 18:33:28.392787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.484 [2024-07-26 18:33:28.393251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.484 [2024-07-26 18:33:28.393282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.484 [2024-07-26 18:33:28.393300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.484 [2024-07-26 18:33:28.393540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.484 [2024-07-26 18:33:28.393791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.484 [2024-07-26 18:33:28.393817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.484 [2024-07-26 18:33:28.393833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.484 [2024-07-26 18:33:28.397552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.484 [2024-07-26 18:33:28.406666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.484 [2024-07-26 18:33:28.407153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.484 [2024-07-26 18:33:28.407181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.484 [2024-07-26 18:33:28.407197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.484 [2024-07-26 18:33:28.407446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.484 [2024-07-26 18:33:28.407691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.484 [2024-07-26 18:33:28.407716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.484 [2024-07-26 18:33:28.407732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.484 [2024-07-26 18:33:28.411330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.484 [2024-07-26 18:33:28.420668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.484 [2024-07-26 18:33:28.421117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.421145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.421161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.421410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.421654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.421679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.421695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.425288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.434598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.435043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.435083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.435102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.435341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.435586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.435611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.435626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.439230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.448533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.449121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.449153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.449172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.449412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.449656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.449681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.449698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.453291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.462402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.462850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.462882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.462901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.463154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.463401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.463427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.463443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.467025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.476334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.476773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.476800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.476815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.477053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.477312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.477337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.477353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.480937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.490260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.490710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.490738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.490762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.491013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.491272] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.491298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.491313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.494895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.504235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.504751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.504778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.504794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.505049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.505306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.505330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.505347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.508940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.518278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.518885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.518941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.518959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.519210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.519456] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.519481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.519497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.523086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.532186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.532677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.532728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.532747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.485 [2024-07-26 18:33:28.532986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.485 [2024-07-26 18:33:28.533241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.485 [2024-07-26 18:33:28.533271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.485 [2024-07-26 18:33:28.533288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.485 [2024-07-26 18:33:28.536878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.485 [2024-07-26 18:33:28.546207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.485 [2024-07-26 18:33:28.546705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.485 [2024-07-26 18:33:28.546756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.485 [2024-07-26 18:33:28.546774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.486 [2024-07-26 18:33:28.547014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.486 [2024-07-26 18:33:28.547269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.486 [2024-07-26 18:33:28.547295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.486 [2024-07-26 18:33:28.547311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.486 [2024-07-26 18:33:28.550891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.486 [2024-07-26 18:33:28.560224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.486 [2024-07-26 18:33:28.560667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.486 [2024-07-26 18:33:28.560715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.486 [2024-07-26 18:33:28.560733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.486 [2024-07-26 18:33:28.560973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.486 [2024-07-26 18:33:28.561230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.486 [2024-07-26 18:33:28.561257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.486 [2024-07-26 18:33:28.561273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.486 [2024-07-26 18:33:28.564871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.486 [2024-07-26 18:33:28.574095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.486 [2024-07-26 18:33:28.574524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.486 [2024-07-26 18:33:28.574555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.486 [2024-07-26 18:33:28.574572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.486 [2024-07-26 18:33:28.574804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.486 [2024-07-26 18:33:28.575040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.486 [2024-07-26 18:33:28.575075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.486 [2024-07-26 18:33:28.575094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.486 [2024-07-26 18:33:28.578457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.486 [2024-07-26 18:33:28.587554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.486 [2024-07-26 18:33:28.588020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.486 [2024-07-26 18:33:28.588049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.486 [2024-07-26 18:33:28.588076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.486 [2024-07-26 18:33:28.588324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.486 [2024-07-26 18:33:28.588531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.486 [2024-07-26 18:33:28.588553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.486 [2024-07-26 18:33:28.588567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.486 [2024-07-26 18:33:28.591535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.486 [2024-07-26 18:33:28.600842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.486 [2024-07-26 18:33:28.601323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.486 [2024-07-26 18:33:28.601367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.486 [2024-07-26 18:33:28.601384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.486 [2024-07-26 18:33:28.601614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.486 [2024-07-26 18:33:28.601810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.486 [2024-07-26 18:33:28.601831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.486 [2024-07-26 18:33:28.601844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.486 [2024-07-26 18:33:28.604853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.486 [2024-07-26 18:33:28.614142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.486 [2024-07-26 18:33:28.614556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.486 [2024-07-26 18:33:28.614583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.486 [2024-07-26 18:33:28.614598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.486 [2024-07-26 18:33:28.614830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.486 [2024-07-26 18:33:28.615025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.486 [2024-07-26 18:33:28.615046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.486 [2024-07-26 18:33:28.615083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.486 [2024-07-26 18:33:28.618065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.746 [2024-07-26 18:33:28.627629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.746 [2024-07-26 18:33:28.627995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.746 [2024-07-26 18:33:28.628023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.746 [2024-07-26 18:33:28.628040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.746 [2024-07-26 18:33:28.628287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.746 [2024-07-26 18:33:28.628527] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.746 [2024-07-26 18:33:28.628564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.746 [2024-07-26 18:33:28.628578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.746 [2024-07-26 18:33:28.631545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.746 [2024-07-26 18:33:28.640906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.746 [2024-07-26 18:33:28.641312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.746 [2024-07-26 18:33:28.641342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.746 [2024-07-26 18:33:28.641359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.746 [2024-07-26 18:33:28.641602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.746 [2024-07-26 18:33:28.641815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.746 [2024-07-26 18:33:28.641836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.746 [2024-07-26 18:33:28.641849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.746 [2024-07-26 18:33:28.644940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.746 [2024-07-26 18:33:28.654212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.746 [2024-07-26 18:33:28.654662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.746 [2024-07-26 18:33:28.654690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.746 [2024-07-26 18:33:28.654705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.746 [2024-07-26 18:33:28.654941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.746 [2024-07-26 18:33:28.655199] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.746 [2024-07-26 18:33:28.655223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.746 [2024-07-26 18:33:28.655236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.746 [2024-07-26 18:33:28.658217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.746 [2024-07-26 18:33:28.667527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.746 [2024-07-26 18:33:28.667879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.746 [2024-07-26 18:33:28.667906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.746 [2024-07-26 18:33:28.667922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.746 [2024-07-26 18:33:28.668170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.746 [2024-07-26 18:33:28.668394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.746 [2024-07-26 18:33:28.668416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.746 [2024-07-26 18:33:28.668449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.746 [2024-07-26 18:33:28.671421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
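A note on the "(9): Bad file descriptor" flush errors repeating above: errno 9 is EBADF. After the connect attempt fails, the qpair's socket has already been torn down, so the subsequent flush in nvme_tcp_qpair_process_completions operates on a fd that is no longer valid. A minimal sketch (plain sockets again, not the SPDK flush path) that produces the same errno:

/* Standalone sketch: I/O on a closed socket fd fails with
 * errno 9 (EBADF), the error the flush lines above report. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                        /* socket torn down; fd now invalid */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {  /* "flush" attempt on the dead fd */
        printf("send failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}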
00:33:02.746 [2024-07-26 18:33:28.680736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.746 [2024-07-26 18:33:28.681204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.746 [2024-07-26 18:33:28.681233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.746 [2024-07-26 18:33:28.681249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.746 [2024-07-26 18:33:28.681504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.746 [2024-07-26 18:33:28.681699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.746 [2024-07-26 18:33:28.681721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.746 [2024-07-26 18:33:28.681734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.746 [2024-07-26 18:33:28.684745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.746 [2024-07-26 18:33:28.694029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.746 [2024-07-26 18:33:28.694435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.746 [2024-07-26 18:33:28.694463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.694479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.694710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.694905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.694926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.694938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.697949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.707327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.707763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.707791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.707806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.708056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.708290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.708312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.708325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.711295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.720671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.721085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.721117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.721134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.721369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.721580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.721602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.721616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.724610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.733866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.734333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.734378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.734394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.734631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.734841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.734862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.734875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.737888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.747112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.747473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.747501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.747516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.747734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.747945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.747966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.747980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.750956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.760449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.760869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.760896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.760912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.761125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.761337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.761374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.761388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.764380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.773672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.774021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.774068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.774086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.774303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.774516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.774537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.774550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.777556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.786977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.787471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.787500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.787516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.787769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.787963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.787985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.787997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.791009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.800333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.800812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.800840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.800856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.801118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.801326] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.801347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.801361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.804365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.813668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.747 [2024-07-26 18:33:28.814139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.747 [2024-07-26 18:33:28.814168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.747 [2024-07-26 18:33:28.814185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.747 [2024-07-26 18:33:28.814437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.747 [2024-07-26 18:33:28.814632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.747 [2024-07-26 18:33:28.814653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.747 [2024-07-26 18:33:28.814667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.747 [2024-07-26 18:33:28.817640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.747 [2024-07-26 18:33:28.826916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.748 [2024-07-26 18:33:28.827356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.748 [2024-07-26 18:33:28.827401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.748 [2024-07-26 18:33:28.827417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.748 [2024-07-26 18:33:28.827665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.748 [2024-07-26 18:33:28.827859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.748 [2024-07-26 18:33:28.827880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.748 [2024-07-26 18:33:28.827893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.748 [2024-07-26 18:33:28.830897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.748 [2024-07-26 18:33:28.840229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.748 [2024-07-26 18:33:28.840644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.748 [2024-07-26 18:33:28.840673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.748 [2024-07-26 18:33:28.840689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.748 [2024-07-26 18:33:28.840922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.748 [2024-07-26 18:33:28.841160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.748 [2024-07-26 18:33:28.841183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.748 [2024-07-26 18:33:28.841197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.748 [2024-07-26 18:33:28.844257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.748 [2024-07-26 18:33:28.853544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.748 [2024-07-26 18:33:28.853941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.748 [2024-07-26 18:33:28.853969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.748 [2024-07-26 18:33:28.853989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.748 [2024-07-26 18:33:28.854236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.748 [2024-07-26 18:33:28.854455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.748 [2024-07-26 18:33:28.854476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.748 [2024-07-26 18:33:28.854490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.748 [2024-07-26 18:33:28.857453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.748 [2024-07-26 18:33:28.866702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.748 [2024-07-26 18:33:28.867106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.748 [2024-07-26 18:33:28.867135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.748 [2024-07-26 18:33:28.867152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.748 [2024-07-26 18:33:28.867407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.748 [2024-07-26 18:33:28.867602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.748 [2024-07-26 18:33:28.867623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.748 [2024-07-26 18:33:28.867636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.748 [2024-07-26 18:33:28.870628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.748 [2024-07-26 18:33:28.879871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.748 [2024-07-26 18:33:28.880379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.748 [2024-07-26 18:33:28.880407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:02.748 [2024-07-26 18:33:28.880423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:02.748 [2024-07-26 18:33:28.880671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:02.748 [2024-07-26 18:33:28.880866] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.748 [2024-07-26 18:33:28.880886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.748 [2024-07-26 18:33:28.880899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.748 [2024-07-26 18:33:28.883907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.893449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.007 [2024-07-26 18:33:28.893854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.007 [2024-07-26 18:33:28.893884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.007 [2024-07-26 18:33:28.893900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.007 [2024-07-26 18:33:28.894139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.007 [2024-07-26 18:33:28.894395] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.007 [2024-07-26 18:33:28.894422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.007 [2024-07-26 18:33:28.894453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.007 [2024-07-26 18:33:28.897657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.906795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.007 [2024-07-26 18:33:28.907195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.007 [2024-07-26 18:33:28.907224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.007 [2024-07-26 18:33:28.907241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.007 [2024-07-26 18:33:28.907483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.007 [2024-07-26 18:33:28.907678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.007 [2024-07-26 18:33:28.907700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.007 [2024-07-26 18:33:28.907713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.007 [2024-07-26 18:33:28.910748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.920018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.007 [2024-07-26 18:33:28.920525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.007 [2024-07-26 18:33:28.920552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.007 [2024-07-26 18:33:28.920567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.007 [2024-07-26 18:33:28.920783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.007 [2024-07-26 18:33:28.920994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.007 [2024-07-26 18:33:28.921015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.007 [2024-07-26 18:33:28.921028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.007 [2024-07-26 18:33:28.924031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.933325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.007 [2024-07-26 18:33:28.933704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.007 [2024-07-26 18:33:28.933731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.007 [2024-07-26 18:33:28.933747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.007 [2024-07-26 18:33:28.933977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.007 [2024-07-26 18:33:28.934220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.007 [2024-07-26 18:33:28.934243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.007 [2024-07-26 18:33:28.934257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.007 [2024-07-26 18:33:28.937236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.946664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.007 [2024-07-26 18:33:28.947138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.007 [2024-07-26 18:33:28.947167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.007 [2024-07-26 18:33:28.947184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.007 [2024-07-26 18:33:28.947438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.007 [2024-07-26 18:33:28.947632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.007 [2024-07-26 18:33:28.947653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.007 [2024-07-26 18:33:28.947665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.007 [2024-07-26 18:33:28.950672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.959927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.007 [2024-07-26 18:33:28.960422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.007 [2024-07-26 18:33:28.960452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.007 [2024-07-26 18:33:28.960469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.007 [2024-07-26 18:33:28.960721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.007 [2024-07-26 18:33:28.960916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.007 [2024-07-26 18:33:28.960937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.007 [2024-07-26 18:33:28.960949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.007 [2024-07-26 18:33:28.963965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.007 [2024-07-26 18:33:28.973248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.007 [2024-07-26 18:33:28.973729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.007 [2024-07-26 18:33:28.973757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.007 [2024-07-26 18:33:28.973773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.007 [2024-07-26 18:33:28.974020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.007 [2024-07-26 18:33:28.974263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.007 [2024-07-26 18:33:28.974285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:28.974299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:28.977264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.008 [2024-07-26 18:33:28.986527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:28.986901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:28.986930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:28.986946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:28.987197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:28.987413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:28.987434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:28.987447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:28.990415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.008 [2024-07-26 18:33:28.999745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.000146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.000175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.000193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.000441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.000638] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.000659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.000673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.003761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.008 [2024-07-26 18:33:29.012924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.013357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.013386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.013403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.013621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.013831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.013852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.013865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.016876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.008 [2024-07-26 18:33:29.026184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.026621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.026649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.026664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.026895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.027117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.027155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.027174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.030172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.008 [2024-07-26 18:33:29.039554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.039961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.039990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.040007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.040257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.040489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.040510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.040524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.043497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.008 [2024-07-26 18:33:29.052900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.053331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.053361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.053378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.053630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.053826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.053846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.053860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.056868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.008 [2024-07-26 18:33:29.066155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.066595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.066625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.066641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.066894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.067115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.067138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.067151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.070182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.008 [2024-07-26 18:33:29.079481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.079883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.079916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.079933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.080197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.080413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.080433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.080447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.083459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.008 [2024-07-26 18:33:29.092769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.093195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.093224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.093241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.093483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.093695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.093717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.093730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.096698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.008 [2024-07-26 18:33:29.106122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.008 [2024-07-26 18:33:29.106538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.008 [2024-07-26 18:33:29.106566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.008 [2024-07-26 18:33:29.106581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.008 [2024-07-26 18:33:29.106810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.008 [2024-07-26 18:33:29.107005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.008 [2024-07-26 18:33:29.107025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.008 [2024-07-26 18:33:29.107053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.008 [2024-07-26 18:33:29.110032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.009 [2024-07-26 18:33:29.119519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.009 [2024-07-26 18:33:29.119931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.009 [2024-07-26 18:33:29.119959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.009 [2024-07-26 18:33:29.119975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.009 [2024-07-26 18:33:29.120219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.009 [2024-07-26 18:33:29.120441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.009 [2024-07-26 18:33:29.120461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.009 [2024-07-26 18:33:29.120474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.009 [2024-07-26 18:33:29.123488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.009 [2024-07-26 18:33:29.132841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.009 [2024-07-26 18:33:29.133293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.009 [2024-07-26 18:33:29.133323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.009 [2024-07-26 18:33:29.133339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.009 [2024-07-26 18:33:29.133583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.009 [2024-07-26 18:33:29.133793] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.009 [2024-07-26 18:33:29.133813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.009 [2024-07-26 18:33:29.133826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.009 [2024-07-26 18:33:29.136797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.009 [2024-07-26 18:33:29.146217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.009 [2024-07-26 18:33:29.146689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.009 [2024-07-26 18:33:29.146718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.009 [2024-07-26 18:33:29.146735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.009 [2024-07-26 18:33:29.146964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.009 [2024-07-26 18:33:29.147239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.009 [2024-07-26 18:33:29.147263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.009 [2024-07-26 18:33:29.147277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.150678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.269 [2024-07-26 18:33:29.159547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.269 [2024-07-26 18:33:29.160015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.269 [2024-07-26 18:33:29.160044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.269 [2024-07-26 18:33:29.160066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.269 [2024-07-26 18:33:29.160285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.269 [2024-07-26 18:33:29.160518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.269 [2024-07-26 18:33:29.160539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.269 [2024-07-26 18:33:29.160553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.163694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.269 [2024-07-26 18:33:29.172893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.269 [2024-07-26 18:33:29.173325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.269 [2024-07-26 18:33:29.173355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.269 [2024-07-26 18:33:29.173371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.269 [2024-07-26 18:33:29.173625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.269 [2024-07-26 18:33:29.173821] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.269 [2024-07-26 18:33:29.173841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.269 [2024-07-26 18:33:29.173853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.176847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.269 [2024-07-26 18:33:29.186190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.269 [2024-07-26 18:33:29.186611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.269 [2024-07-26 18:33:29.186639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.269 [2024-07-26 18:33:29.186655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.269 [2024-07-26 18:33:29.186889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.269 [2024-07-26 18:33:29.187126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.269 [2024-07-26 18:33:29.187147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.269 [2024-07-26 18:33:29.187161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.190156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.269 [2024-07-26 18:33:29.199483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.269 [2024-07-26 18:33:29.199915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.269 [2024-07-26 18:33:29.199942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.269 [2024-07-26 18:33:29.199958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.269 [2024-07-26 18:33:29.200223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.269 [2024-07-26 18:33:29.200458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.269 [2024-07-26 18:33:29.200479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.269 [2024-07-26 18:33:29.200492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.203460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.269 [2024-07-26 18:33:29.212846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.269 [2024-07-26 18:33:29.213301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.269 [2024-07-26 18:33:29.213329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.269 [2024-07-26 18:33:29.213351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.269 [2024-07-26 18:33:29.213603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.269 [2024-07-26 18:33:29.213798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.269 [2024-07-26 18:33:29.213819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.269 [2024-07-26 18:33:29.213831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.216825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.269 [2024-07-26 18:33:29.226154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.269 [2024-07-26 18:33:29.226636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.269 [2024-07-26 18:33:29.226663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.269 [2024-07-26 18:33:29.226679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.269 [2024-07-26 18:33:29.226931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.269 [2024-07-26 18:33:29.227263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.269 [2024-07-26 18:33:29.227288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.269 [2024-07-26 18:33:29.227303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.269 [2024-07-26 18:33:29.230314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.270 [2024-07-26 18:33:29.239482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.239889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.239918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.239934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.240197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.240434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.240456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.240469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.243441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.270 [2024-07-26 18:33:29.252805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.253209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.253237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.253253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.253504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.253700] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.253725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.253739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.256866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.270 [2024-07-26 18:33:29.266129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.266538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.266566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.266582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.266816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.267011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.267032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.267068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.270180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.270 [2024-07-26 18:33:29.279539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.279940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.279968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.279985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.280224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.280469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.280491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.280504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.283525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.270 [2024-07-26 18:33:29.292729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.293243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.293272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.293289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.293529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.293740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.293761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.293775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.296785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.270 [2024-07-26 18:33:29.306090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.306561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.306590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.306605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.306853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.307073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.307095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.307124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.310113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.270 [2024-07-26 18:33:29.319417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.319821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.319849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.319865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.320124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.320332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.320354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.320368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.323349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.270 [2024-07-26 18:33:29.332642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.333110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.333139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.333155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.333412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.333607] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.333628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.333642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.336648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.270 [2024-07-26 18:33:29.345930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.270 [2024-07-26 18:33:29.346370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.270 [2024-07-26 18:33:29.346397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.270 [2024-07-26 18:33:29.346413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.270 [2024-07-26 18:33:29.346632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.270 [2024-07-26 18:33:29.346841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.270 [2024-07-26 18:33:29.346862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.270 [2024-07-26 18:33:29.346875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.270 [2024-07-26 18:33:29.349868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.270 [2024-07-26 18:33:29.359138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.271 [2024-07-26 18:33:29.359553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.271 [2024-07-26 18:33:29.359581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.271 [2024-07-26 18:33:29.359597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.271 [2024-07-26 18:33:29.359838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.271 [2024-07-26 18:33:29.360047] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.271 [2024-07-26 18:33:29.360092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.271 [2024-07-26 18:33:29.360108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.271 [2024-07-26 18:33:29.363084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.271 [2024-07-26 18:33:29.372398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.271 [2024-07-26 18:33:29.372787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.271 [2024-07-26 18:33:29.372814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.271 [2024-07-26 18:33:29.372830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.271 [2024-07-26 18:33:29.373047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.271 [2024-07-26 18:33:29.373277] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.271 [2024-07-26 18:33:29.373299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.271 [2024-07-26 18:33:29.373312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.271 [2024-07-26 18:33:29.376294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.271 [2024-07-26 18:33:29.385581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.271 [2024-07-26 18:33:29.385990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.271 [2024-07-26 18:33:29.386019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.271 [2024-07-26 18:33:29.386035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.271 [2024-07-26 18:33:29.386274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.271 [2024-07-26 18:33:29.386493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.271 [2024-07-26 18:33:29.386514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.271 [2024-07-26 18:33:29.386532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.271 [2024-07-26 18:33:29.389499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.271 [2024-07-26 18:33:29.398914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.271 [2024-07-26 18:33:29.399313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.271 [2024-07-26 18:33:29.399342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.271 [2024-07-26 18:33:29.399359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.271 [2024-07-26 18:33:29.399605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.271 [2024-07-26 18:33:29.399799] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.271 [2024-07-26 18:33:29.399820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.271 [2024-07-26 18:33:29.399833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.271 [2024-07-26 18:33:29.402869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.531 [2024-07-26 18:33:29.412704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.531 [2024-07-26 18:33:29.413142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.531 [2024-07-26 18:33:29.413172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.531 [2024-07-26 18:33:29.413188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.531 [2024-07-26 18:33:29.413432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.531 [2024-07-26 18:33:29.413643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.531 [2024-07-26 18:33:29.413664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.531 [2024-07-26 18:33:29.413678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.531 [2024-07-26 18:33:29.416934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.531 [2024-07-26 18:33:29.426002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.531 [2024-07-26 18:33:29.426394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.531 [2024-07-26 18:33:29.426423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.531 [2024-07-26 18:33:29.426439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.531 [2024-07-26 18:33:29.426681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.531 [2024-07-26 18:33:29.426893] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.531 [2024-07-26 18:33:29.426913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.531 [2024-07-26 18:33:29.426926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.531 [2024-07-26 18:33:29.429918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.531 [2024-07-26 18:33:29.439257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.531 [2024-07-26 18:33:29.439654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.531 [2024-07-26 18:33:29.439687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.531 [2024-07-26 18:33:29.439705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.531 [2024-07-26 18:33:29.439942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.440183] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.440205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.440220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.443122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.532 [2024-07-26 18:33:29.452506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.452920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.452947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.452962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.453222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.453438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.453458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.453471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.456479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.532 [2024-07-26 18:33:29.465809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.466209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.466238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.466254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.466504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.466700] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.466721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.466734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.469794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.532 [2024-07-26 18:33:29.479166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.479657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.479686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.479702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.479955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.480200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.480222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.480236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.483345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.532 [2024-07-26 18:33:29.492442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.492813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.492841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.492857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.493105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.493357] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.493380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.493395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.496464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.532 [2024-07-26 18:33:29.505728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.506199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.506228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.506245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.506495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.506690] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.506711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.506724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.509719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.532 [2024-07-26 18:33:29.518984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.519481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.519508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.519524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.519759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.519969] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.519989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.520002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.523011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.532 [2024-07-26 18:33:29.532434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.532833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.532861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.532877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.533106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.533312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.533349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.533363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.536430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.532 [2024-07-26 18:33:29.545668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.532 [2024-07-26 18:33:29.546087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.532 [2024-07-26 18:33:29.546117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:03.532 [2024-07-26 18:33:29.546133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:03.532 [2024-07-26 18:33:29.546374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:03.532 [2024-07-26 18:33:29.546587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.532 [2024-07-26 18:33:29.546608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.532 [2024-07-26 18:33:29.546621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.532 [2024-07-26 18:33:29.549695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.532 [2024-07-26 18:33:29.558858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.532 [2024-07-26 18:33:29.559259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.532 [2024-07-26 18:33:29.559289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.532 [2024-07-26 18:33:29.559306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.532 [2024-07-26 18:33:29.559560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.532 [2024-07-26 18:33:29.559772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.532 [2024-07-26 18:33:29.559793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.559807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.562774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.572148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.572646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.572675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.572696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.572947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.573170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.573192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.573206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.576194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.585397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.585800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.585828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.585844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.586102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.586329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.586352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.586366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.589497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.598676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.599081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.599121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.599138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.599394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.599589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.599610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.599623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.602609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.611898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.612306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.612336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.612368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.612616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.612811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.612835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.612848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.615846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.625187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.625671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.625699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.625715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.625933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.626199] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.626224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.626239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.629225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.638517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.638920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.638948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.638964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.639209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.639425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.639446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.639459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.642423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.651778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.652148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.652177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.652193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.652423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.652634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.652655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.652669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.655742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.533 [2024-07-26 18:33:29.665096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.533 [2024-07-26 18:33:29.665565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.533 [2024-07-26 18:33:29.665594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.533 [2024-07-26 18:33:29.665611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.533 [2024-07-26 18:33:29.665862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.533 [2024-07-26 18:33:29.666083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.533 [2024-07-26 18:33:29.666121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.533 [2024-07-26 18:33:29.666136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.533 [2024-07-26 18:33:29.669198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.794 [2024-07-26 18:33:29.678378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.794 [2024-07-26 18:33:29.678774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.794 [2024-07-26 18:33:29.678801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.794 [2024-07-26 18:33:29.678816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.794 [2024-07-26 18:33:29.679048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.794 [2024-07-26 18:33:29.679307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.794 [2024-07-26 18:33:29.679331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.794 [2024-07-26 18:33:29.679347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.794 [2024-07-26 18:33:29.682543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.794 [2024-07-26 18:33:29.691651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.794 [2024-07-26 18:33:29.692115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.794 [2024-07-26 18:33:29.692144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.794 [2024-07-26 18:33:29.692161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.794 [2024-07-26 18:33:29.692416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.794 [2024-07-26 18:33:29.692611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.794 [2024-07-26 18:33:29.692631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.794 [2024-07-26 18:33:29.692644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.794 [2024-07-26 18:33:29.695612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.794 [2024-07-26 18:33:29.704895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.794 [2024-07-26 18:33:29.705278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.794 [2024-07-26 18:33:29.705306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.794 [2024-07-26 18:33:29.705322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.794 [2024-07-26 18:33:29.705567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.794 [2024-07-26 18:33:29.705777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.794 [2024-07-26 18:33:29.705798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.794 [2024-07-26 18:33:29.705812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.794 [2024-07-26 18:33:29.708976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.794 [2024-07-26 18:33:29.718906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.794 [2024-07-26 18:33:29.719336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.794 [2024-07-26 18:33:29.719368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.794 [2024-07-26 18:33:29.719386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.794 [2024-07-26 18:33:29.719626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.794 [2024-07-26 18:33:29.719871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.794 [2024-07-26 18:33:29.719896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.719913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.723488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.732790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.733252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.733280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.733296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.733538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.733783] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.733809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.733825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.737417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.746717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.747181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.747211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.747227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.747462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.747721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.747747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.747769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.751362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.760667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.761128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.761157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.761173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.761424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.761670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.761696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.761712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.765308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.774637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.775166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.775200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.775218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.775460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.775705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.775731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.775747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.779341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.788646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.789105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.789138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.789157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.789398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.789643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.789669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.789686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.793281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.802586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.803030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.803118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.803140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.803407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.803656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.803683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.803699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.807295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.816603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.817045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.817087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.817106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.817346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.817591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.817616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.817632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.821227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.830540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.830968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.831000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.831019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.831273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.831519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.831545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.831561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.835150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.844458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.844920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.844948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.844964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.845230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.845484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.795 [2024-07-26 18:33:29.845510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.795 [2024-07-26 18:33:29.845527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.795 [2024-07-26 18:33:29.849117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.795 [2024-07-26 18:33:29.858419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.795 [2024-07-26 18:33:29.858878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.795 [2024-07-26 18:33:29.858905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.795 [2024-07-26 18:33:29.858920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.795 [2024-07-26 18:33:29.859177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.795 [2024-07-26 18:33:29.859434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.796 [2024-07-26 18:33:29.859461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.796 [2024-07-26 18:33:29.859477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.796 [2024-07-26 18:33:29.863069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.796 [2024-07-26 18:33:29.872395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.796 [2024-07-26 18:33:29.872854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.796 [2024-07-26 18:33:29.872886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.796 [2024-07-26 18:33:29.872904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.796 [2024-07-26 18:33:29.873156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.796 [2024-07-26 18:33:29.873402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.796 [2024-07-26 18:33:29.873428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.796 [2024-07-26 18:33:29.873444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.796 [2024-07-26 18:33:29.877029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.796 [2024-07-26 18:33:29.886345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.796 [2024-07-26 18:33:29.886796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.796 [2024-07-26 18:33:29.886824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.796 [2024-07-26 18:33:29.886840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.796 [2024-07-26 18:33:29.887103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.796 [2024-07-26 18:33:29.887348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.796 [2024-07-26 18:33:29.887373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.796 [2024-07-26 18:33:29.887390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.796 [2024-07-26 18:33:29.890977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.796 [2024-07-26 18:33:29.900294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.796 [2024-07-26 18:33:29.900738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.796 [2024-07-26 18:33:29.900771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.796 [2024-07-26 18:33:29.900789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.796 [2024-07-26 18:33:29.901029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.796 [2024-07-26 18:33:29.901284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.796 [2024-07-26 18:33:29.901311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.796 [2024-07-26 18:33:29.901327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.796 [2024-07-26 18:33:29.904911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.796 [2024-07-26 18:33:29.914236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.796 [2024-07-26 18:33:29.914697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.796 [2024-07-26 18:33:29.914725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.796 [2024-07-26 18:33:29.914741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.796 [2024-07-26 18:33:29.914990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.796 [2024-07-26 18:33:29.915247] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.796 [2024-07-26 18:33:29.915274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.796 [2024-07-26 18:33:29.915289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.796 [2024-07-26 18:33:29.918871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.796 [2024-07-26 18:33:29.928189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.796 [2024-07-26 18:33:29.928643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.796 [2024-07-26 18:33:29.928675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:03.796 [2024-07-26 18:33:29.928693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:03.796 [2024-07-26 18:33:29.928932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:03.796 [2024-07-26 18:33:29.929192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.796 [2024-07-26 18:33:29.929219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.796 [2024-07-26 18:33:29.929235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.796 [2024-07-26 18:33:29.932796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.057 [2024-07-26 18:33:29.942102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.057 [2024-07-26 18:33:29.942601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.057 [2024-07-26 18:33:29.942629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.057 [2024-07-26 18:33:29.942650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.057 [2024-07-26 18:33:29.942884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.057 [2024-07-26 18:33:29.943143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.057 [2024-07-26 18:33:29.943170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.057 [2024-07-26 18:33:29.943187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.057 [2024-07-26 18:33:29.946779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.057 [2024-07-26 18:33:29.956097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:29.956537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:29.956570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:29.956588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:29.956829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:29.957088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:29.957114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:29.957131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:29.960714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:29.970033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:29.970478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:29.970510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:29.970528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:29.970769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:29.971013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:29.971038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:29.971055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:29.974651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:29.983956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:29.984417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:29.984449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:29.984466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:29.984707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:29.984951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:29.984981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:29.984998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:29.988590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:29.997891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:29.998349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:29.998381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:29.998400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:29.998640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:29.998884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:29.998910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:29.998926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.002537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:30.011899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:30.012369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:30.012406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:30.012427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:30.012671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:30.012917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:30.012943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:30.012960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.016554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:30.026046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:30.026535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:30.026569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:30.026588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:30.026829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:30.027087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:30.027114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:30.027131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.030733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:30.040067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:30.040634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:30.040686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:30.040704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:30.040945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:30.041200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:30.041227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:30.041244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.044824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:30.053932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:30.054379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:30.054412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:30.054431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:30.054670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:30.054916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:30.054942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:30.054959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.058852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:30.067868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:30.068371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:30.068400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:30.068417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:30.068665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:30.068911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:30.068937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:30.068952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.072562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.058 [2024-07-26 18:33:30.081887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.058 [2024-07-26 18:33:30.082335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.058 [2024-07-26 18:33:30.082367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.058 [2024-07-26 18:33:30.082386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.058 [2024-07-26 18:33:30.082631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.058 [2024-07-26 18:33:30.082878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.058 [2024-07-26 18:33:30.082903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.058 [2024-07-26 18:33:30.082919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.058 [2024-07-26 18:33:30.086515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.059 [2024-07-26 18:33:30.096156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.059 [2024-07-26 18:33:30.096656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.059 [2024-07-26 18:33:30.096687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.059 [2024-07-26 18:33:30.096705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.059 [2024-07-26 18:33:30.096950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.059 [2024-07-26 18:33:30.097191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.059 [2024-07-26 18:33:30.097216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.059 [2024-07-26 18:33:30.097231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.059 [2024-07-26 18:33:30.100380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.059 [2024-07-26 18:33:30.110084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.059 [2024-07-26 18:33:30.110608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.059 [2024-07-26 18:33:30.110640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.059 [2024-07-26 18:33:30.110658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.059 [2024-07-26 18:33:30.110899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.059 [2024-07-26 18:33:30.111157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.059 [2024-07-26 18:33:30.111183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.059 [2024-07-26 18:33:30.111201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.059 [2024-07-26 18:33:30.114788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.059 [2024-07-26 18:33:30.124136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.059 [2024-07-26 18:33:30.124587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.059 [2024-07-26 18:33:30.124620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.059 [2024-07-26 18:33:30.124639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.059 [2024-07-26 18:33:30.124880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.059 [2024-07-26 18:33:30.125139] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.059 [2024-07-26 18:33:30.125165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.059 [2024-07-26 18:33:30.125188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.059 [2024-07-26 18:33:30.128776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.059 [2024-07-26 18:33:30.138104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.059 [2024-07-26 18:33:30.138619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.059 [2024-07-26 18:33:30.138647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.059 [2024-07-26 18:33:30.138663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.059 [2024-07-26 18:33:30.138923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.059 [2024-07-26 18:33:30.139181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.059 [2024-07-26 18:33:30.139207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.059 [2024-07-26 18:33:30.139224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.059 [2024-07-26 18:33:30.142815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.059 [2024-07-26 18:33:30.152151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.059 [2024-07-26 18:33:30.152583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.059 [2024-07-26 18:33:30.152615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.059 [2024-07-26 18:33:30.152633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.059 [2024-07-26 18:33:30.152873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.059 [2024-07-26 18:33:30.153129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.059 [2024-07-26 18:33:30.153156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.059 [2024-07-26 18:33:30.153173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.059 [2024-07-26 18:33:30.156760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.059 [2024-07-26 18:33:30.166094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.059 [2024-07-26 18:33:30.166516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.059 [2024-07-26 18:33:30.166547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.059 [2024-07-26 18:33:30.166565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.059 [2024-07-26 18:33:30.166805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.059 [2024-07-26 18:33:30.167072] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.059 [2024-07-26 18:33:30.167100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.059 [2024-07-26 18:33:30.167117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.059 [2024-07-26 18:33:30.170700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.059 [2024-07-26 18:33:30.180023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.059 [2024-07-26 18:33:30.180509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.059 [2024-07-26 18:33:30.180540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.059 [2024-07-26 18:33:30.180559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.059 [2024-07-26 18:33:30.180799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.059 [2024-07-26 18:33:30.181044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.059 [2024-07-26 18:33:30.181080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.059 [2024-07-26 18:33:30.181098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.059 [2024-07-26 18:33:30.184690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.059 [2024-07-26 18:33:30.194008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.059 [2024-07-26 18:33:30.194435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.059 [2024-07-26 18:33:30.194467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.059 [2024-07-26 18:33:30.194486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.059 [2024-07-26 18:33:30.194726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.059 [2024-07-26 18:33:30.194970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.059 [2024-07-26 18:33:30.194996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.059 [2024-07-26 18:33:30.195012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.059 [2024-07-26 18:33:30.198612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.319 [2024-07-26 18:33:30.207946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.319 [2024-07-26 18:33:30.208376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.319 [2024-07-26 18:33:30.208408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.319 [2024-07-26 18:33:30.208426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.319 [2024-07-26 18:33:30.208666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.319 [2024-07-26 18:33:30.208911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.319 [2024-07-26 18:33:30.208936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.319 [2024-07-26 18:33:30.208951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.319 [2024-07-26 18:33:30.212554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.319 [2024-07-26 18:33:30.221877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.319 [2024-07-26 18:33:30.222314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.319 [2024-07-26 18:33:30.222347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.319 [2024-07-26 18:33:30.222366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.319 [2024-07-26 18:33:30.222606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.319 [2024-07-26 18:33:30.222858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.319 [2024-07-26 18:33:30.222885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.319 [2024-07-26 18:33:30.222901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.319 [2024-07-26 18:33:30.226499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.319 [2024-07-26 18:33:30.235822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.319 [2024-07-26 18:33:30.236277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.319 [2024-07-26 18:33:30.236308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.319 [2024-07-26 18:33:30.236326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.319 [2024-07-26 18:33:30.236566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.319 [2024-07-26 18:33:30.236811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.319 [2024-07-26 18:33:30.236836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.319 [2024-07-26 18:33:30.236852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.319 [2024-07-26 18:33:30.240452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.319 [2024-07-26 18:33:30.249786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.319 [2024-07-26 18:33:30.250245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.319 [2024-07-26 18:33:30.250277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.319 [2024-07-26 18:33:30.250295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.319 [2024-07-26 18:33:30.250535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.319 [2024-07-26 18:33:30.250780] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.319 [2024-07-26 18:33:30.250804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.319 [2024-07-26 18:33:30.250820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.319 [2024-07-26 18:33:30.254494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.319 [2024-07-26 18:33:30.263824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.319 [2024-07-26 18:33:30.264258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.319 [2024-07-26 18:33:30.264290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.319 [2024-07-26 18:33:30.264308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.319 [2024-07-26 18:33:30.264549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.319 [2024-07-26 18:33:30.264803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.319 [2024-07-26 18:33:30.264828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.264844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.268484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.320 [2024-07-26 18:33:30.277822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.278260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.278293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.278311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.278560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.278805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.278830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.278845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.282446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.320 [2024-07-26 18:33:30.291780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.292200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.292231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.292249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.292490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.292735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.292760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.292776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.296375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.320 [2024-07-26 18:33:30.305706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.306112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.306144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.306162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.306403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.306648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.306672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.306688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.310295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.320 [2024-07-26 18:33:30.319622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.320072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.320104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.320128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.320369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.320614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.320639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.320654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.324245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.320 [2024-07-26 18:33:30.333580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.334037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.334078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.334098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.334348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.334593] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.334617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.334634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.338231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.320 [2024-07-26 18:33:30.347553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.348008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.348039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.348057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.348309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.348566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.348591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.348607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.352205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
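The errno = 111 repeated throughout this stretch is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the target is down, so every controller reset from bdev_nvme fails at connect() and the reconnect poll gives up with "Resetting controller failed." A minimal sketch of how one could watch for the listener to come back from inside the test namespace (the netns name, address and port are the ones this log uses; nc is assumed to be available on the host):

while ! ip netns exec cvl_0_0_ns_spdk nc -z -w 1 10.0.0.2 4420; do
    sleep 1   # still ECONNREFUSED (errno 111): no listener on the port yet
done
echo "port 4420 is accepting connections again"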
00:33:04.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1615981 Killed "${NVMF_APP[@]}" "$@"
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:04.320 [2024-07-26 18:33:30.361540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.320 [2024-07-26 18:33:30.362032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.320 [2024-07-26 18:33:30.362097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.320 [2024-07-26 18:33:30.362118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.320 [2024-07-26 18:33:30.362357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.320 [2024-07-26 18:33:30.362602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.320 [2024-07-26 18:33:30.362627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.320 [2024-07-26 18:33:30.362644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1617040
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1617040
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1617040 ']'
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:04.320 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 18:33:30.366251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
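The trace above is the pivot of the test: bdevperf.sh has killed the previous target (pid 1615981), and tgt_init/nvmfappstart relaunch it as pid 1617040 while the host keeps retrying. A hedged sketch of that restart sequence, using the paths and flags visible in the trace (the autotest waitforlisten helper is approximated here by polling the RPC socket):

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
# Relaunch the target in the test network namespace with the traced flags.
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Wait until the app is up and listening on the default RPC socket.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"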
00:33:04.320 [2024-07-26 18:33:30.375588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.376038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.376077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.376098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.320 [2024-07-26 18:33:30.376338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.320 [2024-07-26 18:33:30.376582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.320 [2024-07-26 18:33:30.376607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.320 [2024-07-26 18:33:30.376624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.320 [2024-07-26 18:33:30.380218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.320 [2024-07-26 18:33:30.389534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.320 [2024-07-26 18:33:30.389980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.320 [2024-07-26 18:33:30.390012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.320 [2024-07-26 18:33:30.390030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.321 [2024-07-26 18:33:30.390280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.321 [2024-07-26 18:33:30.390532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.321 [2024-07-26 18:33:30.390557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.321 [2024-07-26 18:33:30.390573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.321 [2024-07-26 18:33:30.394158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.321 [2024-07-26 18:33:30.403463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.321 [2024-07-26 18:33:30.403896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.321 [2024-07-26 18:33:30.403928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.321 [2024-07-26 18:33:30.403946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.321 [2024-07-26 18:33:30.404196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.321 [2024-07-26 18:33:30.404443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.321 [2024-07-26 18:33:30.404467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.321 [2024-07-26 18:33:30.404483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.321 [2024-07-26 18:33:30.408076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.321 [2024-07-26 18:33:30.414336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:04.321 [2024-07-26 18:33:30.414406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.321 [2024-07-26 18:33:30.417385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.321 [2024-07-26 18:33:30.417813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.321 [2024-07-26 18:33:30.417845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.321 [2024-07-26 18:33:30.417863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.321 [2024-07-26 18:33:30.418113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.321 [2024-07-26 18:33:30.418359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.321 [2024-07-26 18:33:30.418384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.321 [2024-07-26 18:33:30.418401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.321 [2024-07-26 18:33:30.421984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.321 [2024-07-26 18:33:30.431303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.321 [2024-07-26 18:33:30.431761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.321 [2024-07-26 18:33:30.431792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.321 [2024-07-26 18:33:30.431810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.321 [2024-07-26 18:33:30.432051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.321 [2024-07-26 18:33:30.432306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.321 [2024-07-26 18:33:30.432336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.321 [2024-07-26 18:33:30.432353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.321 [2024-07-26 18:33:30.435935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.321 [2024-07-26 18:33:30.445215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.321 [2024-07-26 18:33:30.445641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.321 [2024-07-26 18:33:30.445673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.321 [2024-07-26 18:33:30.445691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.321 [2024-07-26 18:33:30.445931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.321 [2024-07-26 18:33:30.446186] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.321 [2024-07-26 18:33:30.446211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.321 [2024-07-26 18:33:30.446227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.321 [2024-07-26 18:33:30.449811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.321 EAL: No free 2048 kB hugepages reported on node 1
00:33:04.321 [2024-07-26 18:33:30.459131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.321 [2024-07-26 18:33:30.459586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.321 [2024-07-26 18:33:30.459617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.321 [2024-07-26 18:33:30.459635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.321 [2024-07-26 18:33:30.459883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.321 [2024-07-26 18:33:30.460141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.321 [2024-07-26 18:33:30.460166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.321 [2024-07-26 18:33:30.460182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.321 [2024-07-26 18:33:30.460839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:04.580 [2024-07-26 18:33:30.463764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.580 [2024-07-26 18:33:30.473105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.580 [2024-07-26 18:33:30.473537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.580 [2024-07-26 18:33:30.473568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.580 [2024-07-26 18:33:30.473586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.580 [2024-07-26 18:33:30.473826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.580 [2024-07-26 18:33:30.474086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.580 [2024-07-26 18:33:30.474111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.581 [2024-07-26 18:33:30.474127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.581 [2024-07-26 18:33:30.477715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
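The EAL notice above only means NUMA node 1 has no free 2 MB hugepages reserved; startup proceeds as long as another node can satisfy the app's allocations. A sketch for inspecting and, if needed, topping up the per-node reservation (standard Linux sysfs paths; the count of 1024 pages is illustrative, and on these CI hosts SPDK's scripts/setup.sh normally handles the reservation):

# Show the 2048 kB hugepage reservation per NUMA node.
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# Reserve 1024 pages (2 GB) on node 1 (requires root).
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages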
00:33:04.581 [2024-07-26 18:33:30.487024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.487483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.487514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.487532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.487772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.488016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.488040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.488056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.581 [2024-07-26 18:33:30.491429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:04.581 [2024-07-26 18:33:30.491656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.581 [2024-07-26 18:33:30.500985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.501608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.501650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.501671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.501920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.502179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.502205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.502223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.581 [2024-07-26 18:33:30.505814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
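The 'Total cores available: 3' notice is the direct consequence of the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, bit i selects core i, so the app gets cores 1, 2 and 3 and leaves core 0 alone, matching the three reactors reported a little further down. A quick way to decode such a mask in the shell (illustrative only):

mask=0xE
for core in {0..3}; do
    # Bit i of the core mask set => core i is handed to the SPDK app.
    (( mask & (1 << core) )) && echo "core $core selected"
done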
00:33:04.581 [2024-07-26 18:33:30.514940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.515445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.515482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.515502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.515747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.515994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.516020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.516038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.581 [2024-07-26 18:33:30.519630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.581 [2024-07-26 18:33:30.528939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.529387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.529436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.529456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.529697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.529944] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.529969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.529985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.581 [2024-07-26 18:33:30.533575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.581 [2024-07-26 18:33:30.542884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.543431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.543470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.543490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.543738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.543986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.544011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.544029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.581 [2024-07-26 18:33:30.547625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.581 [2024-07-26 18:33:30.556950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.557496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.557536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.557557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.557805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.558053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.558090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.558108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.581 [2024-07-26 18:33:30.561695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.581 [2024-07-26 18:33:30.571022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.581 [2024-07-26 18:33:30.571493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.581 [2024-07-26 18:33:30.571527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.581 [2024-07-26 18:33:30.571546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.581 [2024-07-26 18:33:30.571787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.581 [2024-07-26 18:33:30.572046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.581 [2024-07-26 18:33:30.572083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.581 [2024-07-26 18:33:30.572101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.581 [2024-07-26 18:33:30.575683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.581 [2024-07-26 18:33:30.584991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.581 [2024-07-26 18:33:30.585441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.581 [2024-07-26 18:33:30.585475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.581 [2024-07-26 18:33:30.585495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.581 [2024-07-26 18:33:30.585628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:04.581 [2024-07-26 18:33:30.585664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:04.581 [2024-07-26 18:33:30.585682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:04.581 [2024-07-26 18:33:30.585695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:04.581 [2024-07-26 18:33:30.585707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:04.581 [2024-07-26 18:33:30.585736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.581 [2024-07-26 18:33:30.585791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:33:04.581 [2024-07-26 18:33:30.585845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:33:04.581 [2024-07-26 18:33:30.585848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:04.581 [2024-07-26 18:33:30.585982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.581 [2024-07-26 18:33:30.586005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.581 [2024-07-26 18:33:30.586020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
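The app_setup_trace notices above spell out how to capture the tracepoints enabled by -e 0xFFFF for this run (shm instance id 0, shm file nvmf_trace.0). A sketch of both options, assuming spdk_trace was built into build/bin of this workspace:

# Snapshot the live tracepoints of the running nvmf app (instance id 0).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or preserve the raw shared-memory file for offline analysis, as the log suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0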
00:33:04.581 [2024-07-26 18:33:30.589610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.581 [2024-07-26 18:33:30.598938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.581 [2024-07-26 18:33:30.599555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.581 [2024-07-26 18:33:30.599598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.581 [2024-07-26 18:33:30.599618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.581 [2024-07-26 18:33:30.599870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.581 [2024-07-26 18:33:30.600129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.581 [2024-07-26 18:33:30.600156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.581 [2024-07-26 18:33:30.600176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.603769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.582 [2024-07-26 18:33:30.612899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.613495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.613557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.613579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.613828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.614087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.614113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.614132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.617719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.582 [2024-07-26 18:33:30.626847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.627471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.627525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.627546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.627801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.628050] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.628084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.628104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.631704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.582 [2024-07-26 18:33:30.640864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.641457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.641504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.641526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.641775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.642023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.642049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.642078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.645670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.582 [2024-07-26 18:33:30.654783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.655418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.655460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.655481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.655730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.655991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.656018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.656035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.659637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.582 [2024-07-26 18:33:30.668777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.669332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.669383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.669405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.669655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.669904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.669931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.669948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.673541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.582 [2024-07-26 18:33:30.682848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.683265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.683299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.683318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.683561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.683806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.683831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.683848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.687447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.582 [2024-07-26 18:33:30.696653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.582 [2024-07-26 18:33:30.697031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.582 [2024-07-26 18:33:30.697068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.582 [2024-07-26 18:33:30.697086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.582 [2024-07-26 18:33:30.697303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.582 [2024-07-26 18:33:30.697539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.582 [2024-07-26 18:33:30.697562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.582 [2024-07-26 18:33:30.697577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.582 [2024-07-26 18:33:30.700856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.582 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:04.582 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:33:04.582 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:04.582 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:04.582 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:04.582 [2024-07-26 18:33:30.710534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.582 [2024-07-26 18:33:30.710917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.582 [2024-07-26 18:33:30.710946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420
00:33:04.582 [2024-07-26 18:33:30.710963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set
00:33:04.582 [2024-07-26 18:33:30.711189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor
00:33:04.582 [2024-07-26 18:33:30.711432] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.582 [2024-07-26 18:33:30.711455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.582 [2024-07-26 18:33:30.711469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.582 [2024-07-26 18:33:30.714768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.841 [2024-07-26 18:33:30.724308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.841 [2024-07-26 18:33:30.724729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.841 [2024-07-26 18:33:30.724768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.841 [2024-07-26 18:33:30.724785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:04.841 [2024-07-26 18:33:30.725001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.841 [2024-07-26 18:33:30.725232] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.841 [2024-07-26 18:33:30.725256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:04.841 [2024-07-26 18:33:30.725272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.841 [2024-07-26 18:33:30.728516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.841 [2024-07-26 18:33:30.729872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.841 [2024-07-26 18:33:30.737801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.841 [2024-07-26 18:33:30.738183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.841 [2024-07-26 18:33:30.738212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.841 [2024-07-26 18:33:30.738228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.841 [2024-07-26 18:33:30.738473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.841 [2024-07-26 18:33:30.738675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.841 [2024-07-26 18:33:30.738695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.841 [2024-07-26 18:33:30.738708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.841 [2024-07-26 18:33:30.742017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.841 [2024-07-26 18:33:30.751484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.841 [2024-07-26 18:33:30.751982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.841 [2024-07-26 18:33:30.752010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.841 [2024-07-26 18:33:30.752026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.841 [2024-07-26 18:33:30.752251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.841 [2024-07-26 18:33:30.752513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.841 [2024-07-26 18:33:30.752536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.841 [2024-07-26 18:33:30.752550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:04.841 [2024-07-26 18:33:30.755968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.841 [2024-07-26 18:33:30.765196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.841 [2024-07-26 18:33:30.765789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.841 [2024-07-26 18:33:30.765837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.841 [2024-07-26 18:33:30.765856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.841 [2024-07-26 18:33:30.766120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.841 [2024-07-26 18:33:30.766353] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.841 [2024-07-26 18:33:30.766390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.841 [2024-07-26 18:33:30.766411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.841 [2024-07-26 18:33:30.769766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.841 Malloc0 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:04.841 [2024-07-26 18:33:30.778782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.841 [2024-07-26 18:33:30.779307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.841 [2024-07-26 18:33:30.779339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.841 [2024-07-26 18:33:30.779358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.841 [2024-07-26 18:33:30.779605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.841 [2024-07-26 18:33:30.779814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.841 [2024-07-26 18:33:30.779836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.841 [2024-07-26 18:33:30.779851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.841 [2024-07-26 18:33:30.783131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.841 [2024-07-26 18:33:30.792384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.841 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:04.841 [2024-07-26 18:33:30.792789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.841 [2024-07-26 18:33:30.792818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976b50 with addr=10.0.0.2, port=4420 00:33:04.841 [2024-07-26 18:33:30.792834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976b50 is same with the state(5) to be set 00:33:04.841 [2024-07-26 18:33:30.793051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976b50 (9): Bad file descriptor 00:33:04.841 [2024-07-26 18:33:30.793281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.842 [2024-07-26 18:33:30.793303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.842 [2024-07-26 18:33:30.793318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.842 [2024-07-26 18:33:30.796332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.842 [2024-07-26 18:33:30.796595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.842 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.842 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1616274 00:33:04.842 [2024-07-26 18:33:30.806021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.842 [2024-07-26 18:33:30.849936] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
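Interleaved with the reset errors, the rpc_cmd trace above builds the bdevperf target end to end: TCP transport, a Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. A minimal sketch of the same sequence issued directly with scripts/rpc.py, assuming the target answers on the default /var/tmp/spdk.sock (flags copied from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags exactly as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420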
00:33:14.850 00:33:14.850 Latency(us) 00:33:14.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.850 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:14.850 Verification LBA range: start 0x0 length 0x4000 00:33:14.850 Nvme1n1 : 15.01 6580.12 25.70 8970.56 0.00 8205.94 1013.38 18252.99 00:33:14.850 =================================================================================================================== 00:33:14.850 Total : 6580.12 25.70 8970.56 0.00 8205.94 1013.38 18252.99 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:14.850 rmmod nvme_tcp 00:33:14.850 rmmod nvme_fabrics 00:33:14.850 rmmod nvme_keyring 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:14.850 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1617040 ']' 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1617040 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1617040 ']' 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1617040 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1617040 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1617040' 00:33:14.851 killing process with pid 1617040 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1617040 00:33:14.851 
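The summary row is internally consistent for this 4 KiB workload: MiB/s is IOPS times the 4096-byte I/O size, and the large Fail/s figure is expected given the controller resets injected throughout the run. A one-line check, assuming bc is installed:

echo '6580.12 * 4096 / 1048576' | bc -l    # ~25.70, matching the Nvme1n1 MiB/s column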
18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1617040 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.851 18:33:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:16.758 00:33:16.758 real 0m22.335s 00:33:16.758 user 1m0.281s 00:33:16.758 sys 0m4.142s 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.758 ************************************ 00:33:16.758 END TEST nvmf_bdevperf 00:33:16.758 ************************************ 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.758 ************************************ 00:33:16.758 START TEST nvmf_target_disconnect 00:33:16.758 ************************************ 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:16.758 * Looking for test storage... 
00:33:16.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.758 
18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.758 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:16.759 18:33:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.667 
18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.667 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:18.668 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:18.668 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:18.668 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:18.668 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:18.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:33:18.668 00:33:18.668 --- 10.0.0.2 ping statistics --- 00:33:18.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.668 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:33:18.668 00:33:18.668 --- 10.0.0.1 ping statistics --- 00:33:18.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.668 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:18.668 ************************************ 00:33:18.668 START TEST nvmf_target_disconnect_tc1 00:33:18.668 ************************************ 00:33:18.668 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:18.669 18:33:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:18.669 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.669 [2024-07-26 18:33:44.682775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.669 [2024-07-26 18:33:44.682847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d83e0 with addr=10.0.0.2, port=4420 00:33:18.669 [2024-07-26 18:33:44.682894] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:18.669 [2024-07-26 18:33:44.682914] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:18.669 [2024-07-26 18:33:44.682927] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:18.669 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:18.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:18.669 Initializing NVMe Controllers 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:18.669 00:33:18.669 real 0m0.094s 00:33:18.669 user 0m0.034s 00:33:18.669 sys 0m0.060s 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:18.669 ************************************ 00:33:18.669 END TEST nvmf_target_disconnect_tc1 00:33:18.669 ************************************ 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:18.669 18:33:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:18.669 ************************************ 00:33:18.669 START TEST nvmf_target_disconnect_tc2 00:33:18.669 ************************************ 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1620083 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1620083 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1620083 ']' 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.669 18:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:18.669 [2024-07-26 18:33:44.789181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:18.669 [2024-07-26 18:33:44.789263] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.929 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.929 [2024-07-26 18:33:44.829416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
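For tc2 the target is launched inside the cvl_0_0_ns_spdk namespace with reactors pinned to cores 4-7 (-m 0xF0), and the harness waits for pid 1620083 to come up before configuring it. A reduced sketch of that startup, using a poll of rpc_get_methods as a stand-in for the harness's waitforlisten helper (not the actual implementation):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# poll the default RPC socket until the app is ready to accept rpc_cmd calls
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done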
00:33:18.929 [2024-07-26 18:33:44.856054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:18.929 [2024-07-26 18:33:44.947948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.929 [2024-07-26 18:33:44.948007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.929 [2024-07-26 18:33:44.948020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.929 [2024-07-26 18:33:44.948032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.929 [2024-07-26 18:33:44.948065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.929 [2024-07-26 18:33:44.948149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:18.929 [2024-07-26 18:33:44.948216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:18.929 [2024-07-26 18:33:44.948265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:18.929 [2024-07-26 18:33:44.948267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.189 Malloc0 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.189 [2024-07-26 18:33:45.127237] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.189 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 [2024-07-26 18:33:45.155513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1620192 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:19.190 18:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:19.190 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.098 18:33:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1620083 00:33:21.098 18:33:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:21.098 Read completed with error (sct=0, sc=8) 00:33:21.098 starting I/O failed 00:33:21.098 Read completed 
with error (sct=0, sc=8)
00:33:21.098 starting I/O failed
[... the identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats verbatim for every command still outstanding on the queue ...]
00:33:21.099 [2024-07-26 18:33:47.181974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... another full run of aborted Read/Write completions ...]
00:33:21.099 [2024-07-26 18:33:47.182301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... another full run of aborted Read/Write completions ...]
00:33:21.099 [2024-07-26 18:33:47.182626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
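[Editor's note: the flood of (sct=0, sc=8) completions above is the expected signature of a queue being torn down. Status code type 0 is the NVMe generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which the driver reports for every command still in flight when the transport drops. A minimal standalone sketch of decoding that pair follows; the constants are copied from the NVMe base specification, and decode_status is a hypothetical helper for illustration, not SPDK's own decoder.]

    #include <stdio.h>
    #include <stdint.h>

    /* Status Code Type / Status Code values from the NVMe base specification. */
    #define NVME_SCT_GENERIC            0x0
    #define NVME_SC_ABORTED_SQ_DELETION 0x08

    /* Hypothetical helper: turn the (sct, sc) pair seen in the log into text. */
    static const char *decode_status(uint8_t sct, uint8_t sc)
    {
        if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION) {
            return "Command Aborted due to SQ Deletion";
        }
        return "some other NVMe status (not decoded in this sketch)";
    }

    int main(void)
    {
        /* The pair that repeats throughout the aborted completions above. */
        printf("sct=0, sc=8 -> %s\n", decode_status(0, 8));
        return 0;
    }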
00:33:21.099 [2024-07-26 18:33:47.182865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.099 [2024-07-26 18:33:47.182896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.099 qpair failed and we were unable to recover it.
[... the same connect()/errno-111 triple repeats with advancing timestamps for tqpair=0x7fcfa0000b90 ...]
00:33:21.100 [2024-07-26 18:33:47.187367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.100 [2024-07-26 18:33:47.187424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.100 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x95a4b0 ...]
00:33:21.101 [2024-07-26 18:33:47.195113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.101 [2024-07-26 18:33:47.195155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420
00:33:21.102 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7fcf98000b90 ...]
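[Editor's note: errno = 111 is ECONNREFUSED on Linux. While the target side of the test is down, nothing is listening on 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port), so every reconnect attempt made by posix_sock_create() fails immediately. A self-contained sketch that reproduces the same failure mode outside SPDK follows; the address and port are copied from the log, and it assumes the host is reachable but has no listener on that port.]

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),   /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With no listener on the far side, connect() fails with errno 111
         * (ECONNREFUSED) -- the same error posix_sock_create() keeps logging. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }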
00:33:21.102 Read completed with error (sct=0, sc=8)
00:33:21.102 starting I/O failed
[... the identical aborted Read/Write completion pair repeats for the remaining outstanding commands ...]
00:33:21.102 [2024-07-26 18:33:47.197069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:21.102 [2024-07-26 18:33:47.197251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.102 [2024-07-26 18:33:47.197291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.102 qpair failed and we were unable to recover it.
[... the connect()/errno-111 triple keeps repeating with advancing timestamps, alternating between tqpair=0x95a4b0 and tqpair=0x7fcf98000b90 ...]
00:33:21.105 [2024-07-26 18:33:47.219185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.105 [2024-07-26 18:33:47.219214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420
00:33:21.105 qpair failed and we were unable to recover it.
00:33:21.105 [2024-07-26 18:33:47.219354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.219387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.219581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.219610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.219873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.219901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.220071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.220099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.220308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.220339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.220495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.220520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.220660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.220687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.220854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.220899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.221112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.221141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.221287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.221316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 
00:33:21.106 [2024-07-26 18:33:47.221515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.221543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.221716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.221744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.221991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.222019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.222223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.222255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.222477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.222504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.222667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.222695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.222906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.222937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.223154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.223182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.223355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.223383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.223572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.223602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 
00:33:21.106 [2024-07-26 18:33:47.223795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.223821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.224014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.224045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.224246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.224274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.224452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.224480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.224641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.224669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.224804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.224832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.225020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.225048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.225205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.225233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.225446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.225474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 00:33:21.106 [2024-07-26 18:33:47.225673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.106 [2024-07-26 18:33:47.225701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.106 qpair failed and we were unable to recover it. 
00:33:21.106 [2024-07-26 18:33:47.225955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.225985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.226188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.226217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.226371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.226397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.226570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.226597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.226760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.226788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.226999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.227027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.227204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.227233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.227510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.227537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.227728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.227756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.227960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.227991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 
00:33:21.107 [2024-07-26 18:33:47.228145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.228181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.228388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.228415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.228611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.228638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.228824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.228852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.229031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.229067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.229300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.229328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.229509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.229537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.229740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.229767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.230012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.230042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.230208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.230240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 
00:33:21.107 [2024-07-26 18:33:47.230399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.230427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.230633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.230664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.230826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.230853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.231052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.231085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.231276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.231303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.231520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.231551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.231763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.231790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.231995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.232026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.232191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.232222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.232471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.232499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 
00:33:21.107 [2024-07-26 18:33:47.232701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.107 [2024-07-26 18:33:47.232728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.107 qpair failed and we were unable to recover it. 00:33:21.107 [2024-07-26 18:33:47.232866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.232894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.233086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.233114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.233324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.233355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.233536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.233566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.233771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.233799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.233964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.233991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.234185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.234216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.234430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.234458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.234642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.234672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 
00:33:21.108 [2024-07-26 18:33:47.234849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.234879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.235078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.235108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.235345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.235376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.235551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.235582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.235768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.235796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.235988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.236016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.236221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.236251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.236444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.236472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.236649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.236680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.236830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.236860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 
00:33:21.108 [2024-07-26 18:33:47.237044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.237086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.237279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.237307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.237474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.237502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.237688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.237715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.108 qpair failed and we were unable to recover it. 00:33:21.108 [2024-07-26 18:33:47.237863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.108 [2024-07-26 18:33:47.237891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.238099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.238142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.238341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.238370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.238516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.238544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.238748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.238814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.238992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.239024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 
00:33:21.388 [2024-07-26 18:33:47.239285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.239313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.239523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.239553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.239765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.239793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.239978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.240008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.240233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.240276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.240458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.240487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.240654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.240682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.240881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.240912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.241136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.241165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.241347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.241377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 
00:33:21.388 [2024-07-26 18:33:47.241535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.241567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.241727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.241755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.241961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.241992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.242204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.242235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.242430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.242457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.242664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.242695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.242901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.242931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.243125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.243154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-07-26 18:33:47.243311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-07-26 18:33:47.243342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.243524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.243555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-07-26 18:33:47.243766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.243794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.243932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.243960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.244142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.244171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.244348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.244375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.244516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.244544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.244687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.244716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.244909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.244936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.245149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.245179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.245365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.245395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.245579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.245606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-07-26 18:33:47.245789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.245824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.246009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.246040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.246233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.246260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.246426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.246454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.246670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.246700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.246950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.246977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.247190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.247222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.247426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.247456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.247644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.247672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.247863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.247891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-07-26 18:33:47.248072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.248103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.248290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.248318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.248482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.248509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.248702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.248730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.248880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.248907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.249069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.249098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.249292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.249321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.249490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.249518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.249664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.249694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.249848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.249879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-07-26 18:33:47.250064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.250093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-07-26 18:33:47.250278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-07-26 18:33:47.250307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.250507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.250535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.250724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.250751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.250891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.250919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.251127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.251185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.251404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.251433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.251631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.251662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.251997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.252049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.252245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.252273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-07-26 18:33:47.252456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.252487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.252831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.252884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.253096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.253124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.253277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.253314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.253505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.253533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.253673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.253701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.253883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.253914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.254106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.254135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.254275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.254303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.254492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.254519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-07-26 18:33:47.254720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.254752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.254930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.254958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.255126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.255154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.255342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.255374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.255537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.255565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.255705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.255731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.255893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.255920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.256112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.256139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.256280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.256308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.256522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.256549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-07-26 18:33:47.256709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.256735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.256894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.256938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.257131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.257158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.257323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.257350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.257521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-07-26 18:33:47.257548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-07-26 18:33:47.257708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.257735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.257897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.257924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.258054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.258085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.258245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.258289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.258469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.258496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 
00:33:21.391 [2024-07-26 18:33:47.258656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.258684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.258838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.258868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.259043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.259078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.259267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.259296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.259446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.259476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.259661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.259688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.259854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.259882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.260069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.260112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.260285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.260314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.260527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.260558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 
00:33:21.391 [2024-07-26 18:33:47.260881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.260932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.261085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.261113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.261306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.261334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.261473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.261501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.261660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.261688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.261854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.261881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.262022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.262050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.262229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.262257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.262465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.262496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.262702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.262732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 
00:33:21.391 [2024-07-26 18:33:47.262891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.262924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.263090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.263119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.263301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.263328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.263465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.263493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.263684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.263715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.263859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.263890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.264098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.264127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.264317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.264345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-07-26 18:33:47.264529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-07-26 18:33:47.264556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.264733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.264761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 
00:33:21.392 [2024-07-26 18:33:47.264896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.264937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.265138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.265170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.265376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.265402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.265578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.265606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.265783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.265815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.265963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.265995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.266215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.266247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.266412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.266440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.266577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.266604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.266779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.266822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 
00:33:21.392 [2024-07-26 18:33:47.267053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.267090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.267362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.267390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.267631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.267658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.267822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.267849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.268081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.268123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.268336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.268366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.268529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.268560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.268749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.268777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.268965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.268993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.269214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.269245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 
00:33:21.392 [2024-07-26 18:33:47.269404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.269432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.269635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.269662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.269829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.269856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.270089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.270118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.270308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.270338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.270526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.270571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.270757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.270786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.270967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.271021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.271217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.271245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.271407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.271434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 
00:33:21.392 [2024-07-26 18:33:47.271595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.271645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.392 [2024-07-26 18:33:47.271922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.392 [2024-07-26 18:33:47.271973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.392 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.272168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.272196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.272348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.272378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.272616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.272670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.272858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.272885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.273039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.273072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.273260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.273289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.273496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.273522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.273720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.273750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 
00:33:21.393 [2024-07-26 18:33:47.273938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.273966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.274160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.274186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.274350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.274395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.274692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.274743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.275049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.275083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.275237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.275265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.275406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.275434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.275579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.275606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.275826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.275856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.276055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.276105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 
00:33:21.393 [2024-07-26 18:33:47.276307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.276334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.276521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.276552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.276725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.276754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.276936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.276963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.277139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.277170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.277375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.277405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.277590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.277617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.277767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.277794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.277981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.393 [2024-07-26 18:33:47.278009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.393 qpair failed and we were unable to recover it. 00:33:21.393 [2024-07-26 18:33:47.278174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.278202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 
00:33:21.394 [2024-07-26 18:33:47.278362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.278391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.278572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.278602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.278764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.278791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.278951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.278995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.279237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.279279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.279535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.279587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.279812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.279840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.279996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.280026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.280233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.280264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.280485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.280513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 
00:33:21.394 [2024-07-26 18:33:47.280702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.280735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.280902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.280931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.281198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.281229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.281406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.281437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.281591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.281619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.281762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.281789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.281986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.282014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.282216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.282248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.282462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.282490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.282732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.282760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 
00:33:21.394 [2024-07-26 18:33:47.282929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.282957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.283127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.283173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.283360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.283387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.283599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.283629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.283904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.283931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.284097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.284124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.284333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.284360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.284541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.284572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.284830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.284882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.285070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.285101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 
00:33:21.394 [2024-07-26 18:33:47.285262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.285289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.394 qpair failed and we were unable to recover it. 00:33:21.394 [2024-07-26 18:33:47.285474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.394 [2024-07-26 18:33:47.285504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.285692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.285721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.285892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.285919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.286047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.286078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.286262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.286292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.286499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.286526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.286721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.286753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.286915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.286942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.287152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.287183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 
00:33:21.395 [2024-07-26 18:33:47.287374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.287401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.287591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.287621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.287813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.287841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.288017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.288045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.288256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.288304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.288476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.288504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.288674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.288701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.288861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.288887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.289024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.289051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 00:33:21.395 [2024-07-26 18:33:47.289215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.395 [2024-07-26 18:33:47.289242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.395 qpair failed and we were unable to recover it. 
00:33:21.395 [2024-07-26 18:33:47.289408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.289440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.289602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.289632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.289807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.289837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.289991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.290021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.290211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.290238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.290427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.290457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.290751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.290804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.290996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.291026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.291189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.291215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.291397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.291427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.291732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.291794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.292007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.292034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.292179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.292205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.292399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.292426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.292634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.395 [2024-07-26 18:33:47.292664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.395 qpair failed and we were unable to recover it.
00:33:21.395 [2024-07-26 18:33:47.292843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.292873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.293050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.293092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.293226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.293251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.293409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.293438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.293766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.293819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.294026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.294054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.294254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.294282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.294489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.294519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.294756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.294786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.294965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.294992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.295159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.295186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.295390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.295417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.295597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.295633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.295800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.295827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.295988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.296015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.296218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.296246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.296389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.296416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.296603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.296630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.296819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.296849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.297035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.297071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.297264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.297291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.297477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.297505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.297671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.297699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.297869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.297896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.298073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.298104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.298290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.298317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.298489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.298516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.298658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.298685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.298819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.298845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.299117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.299144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.299305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.299332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.299486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.299512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.396 [2024-07-26 18:33:47.299701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.396 [2024-07-26 18:33:47.299728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.396 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.299889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.299916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.300077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.300104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.300320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.300349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.300544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.300572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.300740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.300767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.300935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.300963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.301104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.301130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.301296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.301323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.301520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.301548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.301731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.301761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.301936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.301967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.302150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.302180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.302363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.302389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.302565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.302595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.302772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.302802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.302994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.303021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.303221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.303249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.303449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.303476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.303667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.303693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.303828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.303859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.304028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.304055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.304197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.304222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.304394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.304425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.304681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.304734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.304922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.304949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.305108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.305138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.305314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.305344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.305526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.305557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.305763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.305790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.305972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.306003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.306207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.306238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.306456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.306483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.306670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.397 [2024-07-26 18:33:47.306697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.397 qpair failed and we were unable to recover it.
00:33:21.397 [2024-07-26 18:33:47.306842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.306869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.307057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.307090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.307290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.307316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.307484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.307511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.307699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.307729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.307947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.308171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.308200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.308346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.308373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.308537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.308564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.308788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.308815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.309017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.309044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.309212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.309239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.309404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.309448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.309610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.309640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.309850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.309878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.310039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.310075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.310262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.310292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.310494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.310524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.310843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.310889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.311081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.311110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.311292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.311320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.311501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.311531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.311738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.311768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.311935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.311962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.312152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.312180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.312400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.312430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.312639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.312692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.312850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.398 [2024-07-26 18:33:47.312877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.398 qpair failed and we were unable to recover it.
00:33:21.398 [2024-07-26 18:33:47.313073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.313099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.313324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.313350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.313537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.313596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.313799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.313826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.313984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.314014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.314208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.314236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.314487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.314545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.314749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.314777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.314962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.314993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.315172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.315202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.315381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.315411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.315622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.315650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.315835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.315866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.316075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.316105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.316383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.316437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.316652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.316679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.316913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.316940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.317130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.317157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.317365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.317392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.317530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.317558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.317745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.317774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.317953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.317983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.318201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.318229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.318372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.318400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.318606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.318636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.318822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.318852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.319036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.319072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.319236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.319263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.319403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.319431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.319621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.319666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.319846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.319876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.320054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.320087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.320282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.399 [2024-07-26 18:33:47.320312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.399 qpair failed and we were unable to recover it.
00:33:21.399 [2024-07-26 18:33:47.320481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.320510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.320683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.320712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.320922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.320950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.321165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.321196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.321350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.321379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.321601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.321658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.321818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.321844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.322032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.322066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.322291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.322321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.322624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.322677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.322884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.322910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.323125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.323155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.323311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.323340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.323491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.323521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.323701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.323729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.323888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.323914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.324108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.324137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.324320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.324348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.324559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.324585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.324739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.324768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.324910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.324938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.325122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.325149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.325341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.325367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.325578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.325608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.325774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.325801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.325934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.325961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.326128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.326156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.326322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.400 [2024-07-26 18:33:47.326369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.400 qpair failed and we were unable to recover it.
00:33:21.400 [2024-07-26 18:33:47.326557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.400 [2024-07-26 18:33:47.326582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.400 qpair failed and we were unable to recover it. 00:33:21.400 [2024-07-26 18:33:47.326744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.400 [2024-07-26 18:33:47.326770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.400 qpair failed and we were unable to recover it. 00:33:21.400 [2024-07-26 18:33:47.326930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.400 [2024-07-26 18:33:47.326956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.400 qpair failed and we were unable to recover it. 00:33:21.400 [2024-07-26 18:33:47.327122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.400 [2024-07-26 18:33:47.327151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.400 qpair failed and we were unable to recover it. 00:33:21.400 [2024-07-26 18:33:47.327371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.400 [2024-07-26 18:33:47.327401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.400 qpair failed and we were unable to recover it. 00:33:21.400 [2024-07-26 18:33:47.327622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.400 [2024-07-26 18:33:47.327671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.327857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.327884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.328047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.328080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.328246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.328272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.328538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.328567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 
00:33:21.401 [2024-07-26 18:33:47.328758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.328785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.328920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.328945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.329152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.329181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.329414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.329465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.329645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.329671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.329857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.329886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.330106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.330132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.330320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.330353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.330563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.330590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.330743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.330772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 
00:33:21.401 [2024-07-26 18:33:47.330968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.330994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.331158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.331188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.331350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.331377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.331513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.331539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.331739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.331767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.331941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.331969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.332190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.332218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.332398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.332428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.332634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.332661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.332843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.332872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 
00:33:21.401 [2024-07-26 18:33:47.333064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.333090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.333281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.333309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.333520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.333545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.333706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.333731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.333916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.333945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.334159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.334186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.334370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.334399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.334608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.334633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.401 [2024-07-26 18:33:47.334818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.401 [2024-07-26 18:33:47.334843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.401 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.335000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.335029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 
00:33:21.402 [2024-07-26 18:33:47.335241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.335269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.335577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.335628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.335783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.335809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.336010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.336038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.336226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.336256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.336504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.336531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.336692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.336717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.336853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.336878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.337041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.337080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.337307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.337335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 
00:33:21.402 [2024-07-26 18:33:47.337486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.337513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.337669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.337711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.337862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.337891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.338092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.338122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.338309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.338335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.338549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.338579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.338771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.338797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.338928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.338958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.339100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.339138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.339307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.339335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 
00:33:21.402 [2024-07-26 18:33:47.339515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.339544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.339700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.339729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.339918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.339945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.402 [2024-07-26 18:33:47.340113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.402 [2024-07-26 18:33:47.340139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.402 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.340334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.340360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.340644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.340886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.340913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.341105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.341135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.341313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.341341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.341663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.341712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 
00:33:21.403 [2024-07-26 18:33:47.341898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.341924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.342111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.342139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.342317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.342346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.342585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.342630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.342842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.342868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.343090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.343118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.343325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.343353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.343508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.343537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.343723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.343749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.343908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.343937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 
00:33:21.403 [2024-07-26 18:33:47.344158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.344184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.344349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.344391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.344576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.344604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.344769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.344795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.344941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.344969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.345156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.345183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.345381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.345407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.345577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.345602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.345761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.345789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.345959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.345989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 
00:33:21.403 [2024-07-26 18:33:47.346182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.346209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.346418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.346448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.346630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.346659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.346832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.346862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.347023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.347048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.403 qpair failed and we were unable to recover it. 00:33:21.403 [2024-07-26 18:33:47.347221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.403 [2024-07-26 18:33:47.347248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.347408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.347437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.347610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.347644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.347857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.347884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.348067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.348093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 
00:33:21.404 [2024-07-26 18:33:47.348307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.348336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.348556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.348607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.348819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.348845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.349057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.349094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.349272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.349301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.349486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.349513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.349704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.349730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.349936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.349965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.350154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.350180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.350380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.350408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 
00:33:21.404 [2024-07-26 18:33:47.350614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.350640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.350821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.350848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.350989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.351015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.351187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.351214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.351373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.351400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.351565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.351591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.351752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.351778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.351970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.351997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.352142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.352168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.352308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.352334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 
00:33:21.404 [2024-07-26 18:33:47.352517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.352545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.352733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.352762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.404 qpair failed and we were unable to recover it. 00:33:21.404 [2024-07-26 18:33:47.352969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.404 [2024-07-26 18:33:47.352995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.353147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.353177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.353367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.353393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.353556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.353597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.353778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.353803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.354012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.354041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.354237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.354263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.354439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.354468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 
00:33:21.405 [2024-07-26 18:33:47.354630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.354655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.354795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.354820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.354981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.355024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.355213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.355243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.355407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.355433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.355599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.355625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.355766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.355810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.355981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.356015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.356202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.356228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.356439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.356467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 
00:33:21.405 [2024-07-26 18:33:47.356641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.356671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.356846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.356876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.357035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.357068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.357211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.357257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.357460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.357489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.357763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.357824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.358004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.358030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.358213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.405 [2024-07-26 18:33:47.358240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.405 qpair failed and we were unable to recover it. 00:33:21.405 [2024-07-26 18:33:47.358428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.406 [2024-07-26 18:33:47.358458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.406 qpair failed and we were unable to recover it. 00:33:21.406 [2024-07-26 18:33:47.358637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.406 [2024-07-26 18:33:47.358666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.406 qpair failed and we were unable to recover it. 
00:33:21.406 [2024-07-26 18:33:47.358852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.406 [2024-07-26 18:33:47.358878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.406 qpair failed and we were unable to recover it.
00:33:21.406 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats four more times for tqpair=0x7fcfa8000b90 (18:33:47.359078 through 18:33:47.359699) ...]
00:33:21.406 [2024-07-26 18:33:47.359882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.406 [2024-07-26 18:33:47.359942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.406 qpair failed and we were unable to recover it.
00:33:21.413 [... the identical three-line failure sequence then repeats continuously for tqpair=0x95a4b0 (18:33:47.360119 through 18:33:47.404693, roughly 200 repetitions); every connect() attempt to 10.0.0.2, port=4420 returns errno = 111 and every qpair fails without recovery ...]
00:33:21.413 [2024-07-26 18:33:47.404872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.404901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.405087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.405131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.405269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.405295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.405427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.405452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.405637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.405666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.405854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.405881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.406039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.406081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.406249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.406275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.406462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.406496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.406678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.406707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 
00:33:21.413 [2024-07-26 18:33:47.406871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.406898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.407068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.407113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.407331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.407357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.407541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.407603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.407815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.407841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.408032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.408070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.408226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.408255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.408435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.408464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.408642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.408668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.408845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.408873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 
00:33:21.413 [2024-07-26 18:33:47.409053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.409093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.409288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.409315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.409486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.409513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.409697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.409725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.409896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.409924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.410088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.410118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.410274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.410300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.413 [2024-07-26 18:33:47.410507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.413 [2024-07-26 18:33:47.410535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.413 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.410724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.410752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.410921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.410950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 
00:33:21.414 [2024-07-26 18:33:47.411125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.411151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.411336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.411365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.411554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.411580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.411871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.411934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.412119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.412145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.412300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.412326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.412530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.412557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.412693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.412735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.412918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.412944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.413113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.413157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 
00:33:21.414 [2024-07-26 18:33:47.413337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.413365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.413605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.413631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.413798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.413825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.414012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.414041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.414239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.414268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.414501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.414527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.414693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.414720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.414934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.414963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.415151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.415181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.415360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.415389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 
00:33:21.414 [2024-07-26 18:33:47.415574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.415602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.415811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.415840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.416020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.416048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.416226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.416252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.416415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.416441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.416651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.416680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.416859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.416887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.417094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.417127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.417341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.417367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 00:33:21.414 [2024-07-26 18:33:47.417531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.414 [2024-07-26 18:33:47.417559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.414 qpair failed and we were unable to recover it. 
00:33:21.414 [2024-07-26 18:33:47.417708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.417737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.417918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.417947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.418129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.418155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.418295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.418321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.418480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.418522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.418679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.418708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.418857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.418882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.419026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.419052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.419224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.419250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.419410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.419436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 
00:33:21.415 [2024-07-26 18:33:47.419626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.419652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.419863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.419892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.420075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.420123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.420260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.420286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.420449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.420476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.420654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.420683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.420860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.420893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.421077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.421107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.421276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.421302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.421458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.421501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 
00:33:21.415 [2024-07-26 18:33:47.421681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.421707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.421870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.421897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.422077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.422103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.422255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.422284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.422490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.422518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.422695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.422724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.422879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.415 [2024-07-26 18:33:47.422905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.415 qpair failed and we were unable to recover it. 00:33:21.415 [2024-07-26 18:33:47.423072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.423114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.423290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.423318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.423580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.423632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 
00:33:21.416 [2024-07-26 18:33:47.423855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.423881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.424073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.424102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.424277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.424306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.424507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.424559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.424743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.424769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.424937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.424964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.425122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.425148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.425364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.425428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.425635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.425662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.425864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.425893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 
00:33:21.416 [2024-07-26 18:33:47.426085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.426112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.426279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.426305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.426453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.426479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.426691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.426720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.426932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.426961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.427176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.427205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.427390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.427416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.427612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.427641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.427823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.427851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.428022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.428051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 
00:33:21.416 [2024-07-26 18:33:47.428222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.428248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.428436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.428463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.428651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.428677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.428848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.428891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.429082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.429109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.429276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.429303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.429473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.429502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.429679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.429712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.429893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.429919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.430087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.430114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 
00:33:21.416 [2024-07-26 18:33:47.430271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.416 [2024-07-26 18:33:47.430307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.416 qpair failed and we were unable to recover it. 00:33:21.416 [2024-07-26 18:33:47.430491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.430521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.430727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.430753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.430941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.430971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.431135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.431162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.431310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.431354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.431530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.431556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.431697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.431741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.431931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.431958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.432143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.432172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 
00:33:21.417 [2024-07-26 18:33:47.432329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.432355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.432552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.432581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.432731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.432760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.432921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.432949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.433110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.433137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.433273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.433316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.433494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.433522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.433665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.433694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.433850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.433876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.434012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.434056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 
00:33:21.417 [2024-07-26 18:33:47.434245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.434274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.434421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.434450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.434623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.434649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.434805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.434834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.435016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.435049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.435239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.435268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.435420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.435446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.435580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.435622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.435802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.435831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 00:33:21.417 [2024-07-26 18:33:47.435971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.417 [2024-07-26 18:33:47.436000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.417 qpair failed and we were unable to recover it. 
00:33:21.417 [2024-07-26 18:33:47.436161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.417 [2024-07-26 18:33:47.436188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.417 qpair failed and we were unable to recover it.
00:33:21.417 [2024-07-26 18:33:47.436355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.417 [2024-07-26 18:33:47.436381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.417 qpair failed and we were unable to recover it.
00:33:21.417 [2024-07-26 18:33:47.436540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.417 [2024-07-26 18:33:47.436566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.417 qpair failed and we were unable to recover it.
00:33:21.417 [2024-07-26 18:33:47.436718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.417 [2024-07-26 18:33:47.436746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.417 qpair failed and we were unable to recover it.
00:33:21.417 [2024-07-26 18:33:47.436927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.417 [2024-07-26 18:33:47.436953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.417 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.437113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.437142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.437357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.437386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.437568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.437614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.437810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.437836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.438021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.438050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.438237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.438267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.438417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.438446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.438633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.438658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.438824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.438850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.439009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.439035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.439172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968470 is same with the state(5) to be set
00:33:21.418 [2024-07-26 18:33:47.439438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.439479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.439625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.439653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.439856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.439884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.440025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.440053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.440228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.440255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.440444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.440471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.440647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.440673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.440855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.440899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.441057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.441097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.441307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.441353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.441534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.441578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.441778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.441823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.441991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.442018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.442167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.442196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.442415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.442460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.442623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.442667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.442866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.442910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.443078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.443130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.443363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.443393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.443590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.443645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.443836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.443881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.444041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.418 [2024-07-26 18:33:47.444074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.418 qpair failed and we were unable to recover it.
00:33:21.418 [2024-07-26 18:33:47.444229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.444274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.444431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.444461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.444676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.444720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.444861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.444899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.445071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.445099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.445302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.445332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.445513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.445557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.445752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.445796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.445934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.445961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.446130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.446157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.446320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.446364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.446557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.446587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.446758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.446802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.446972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.446999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.447198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.447242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.447409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.447439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.447590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.447619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.447835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.447894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.448072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.448116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.448257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.448285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.448490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.448519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.448669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.448698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.448879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.448915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.449073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.449099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.450042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.450090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.450283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.450310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.450486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.450513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.450692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.450722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.450880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.450910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.451095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.451122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.451289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.451316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.451490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.451521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.451744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.451805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.419 [2024-07-26 18:33:47.451974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.419 [2024-07-26 18:33:47.452000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.419 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.452164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.452190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.452356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.452382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.452521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.452564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.452744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.452774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.452956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.452985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.453183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.453210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.453368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.453397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.453582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.453613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.453762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.453791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.453940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.453969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.454176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.454203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.454389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.454418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.454564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.454593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.454772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.454801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.454984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.455013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.455234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.455261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.455405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.455431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.455637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.455670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.455889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.455918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.456070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.456109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.456253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.456278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.456473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.456501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.456678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.456706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.456850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.456879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.457033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.457073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.457261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.457287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.457442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.457471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.457657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.457685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.457890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.457919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.458124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.458150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.458334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.458377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.458573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.420 [2024-07-26 18:33:47.458600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.420 qpair failed and we were unable to recover it.
00:33:21.420 [2024-07-26 18:33:47.458756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.458797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.458948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.458977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.459148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.459175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.459311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.459352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.459516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.459545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.459797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.459826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.459986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.460015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.460188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.460216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.460352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.460379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.460565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.460594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.460744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.460774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.460989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.461018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.461219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.461245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.461388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.461415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.461624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.461654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.461804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.461832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.462011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.462041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.462226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.462253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.462408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.462434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.462612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.462641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.462848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.462876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.463025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.463054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.463222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.463248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.463402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.463442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.463631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.463677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.463864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.463908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.464126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.464159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.464352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.464397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.464605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.464648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.464843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.421 [2024-07-26 18:33:47.464888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.421 qpair failed and we were unable to recover it.
00:33:21.421 [2024-07-26 18:33:47.465089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.465128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.465285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.465330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.465490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.465520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.465706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.465751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.465920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.465949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.466145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.466191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.466354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.466408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.466606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.466650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.466813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.466840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.467027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.467055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.467236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.467283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.467471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.467498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.467661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.467687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.467832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.467859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.467998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.468026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.468199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.468245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.468448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.468478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.468687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.468716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.468872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.468899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.469030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.469057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.469245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.469290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.469451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.469495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.469683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.469737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.469912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.469951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.470119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.470146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.470288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.470314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.470479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.470508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.470688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.470719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.470878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.470908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.471092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.471126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.471262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.471288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.471471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.471500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.471700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.471729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.422 [2024-07-26 18:33:47.471936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.422 [2024-07-26 18:33:47.471965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.422 qpair failed and we were unable to recover it.
00:33:21.423 [2024-07-26 18:33:47.472157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.423 [2024-07-26 18:33:47.472184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.423 qpair failed and we were unable to recover it.
00:33:21.423 [2024-07-26 18:33:47.472343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.423 [2024-07-26 18:33:47.472372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.423 qpair failed and we were unable to recover it.
00:33:21.423 [2024-07-26 18:33:47.472573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.472606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.472781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.472811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.472982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.473023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.473177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.473206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.473365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.473422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.473613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.473657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.473844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.473889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.474077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.474105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.474270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.474315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.474486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.474531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 
00:33:21.423 [2024-07-26 18:33:47.474719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.474762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.474927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.474954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.475146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.475191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.475379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.475424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.475628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.475673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.475813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.475841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.476006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.476034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.476243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.476271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.476450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.476495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.476664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.476708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 
00:33:21.423 [2024-07-26 18:33:47.476884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.476910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.477099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.477157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.477317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.477364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.477558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.477590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.477753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.477780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.477943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.477970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.478150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.478179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.478342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.478375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.478564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.478592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.478768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.478814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 
00:33:21.423 [2024-07-26 18:33:47.478987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.423 [2024-07-26 18:33:47.479015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.423 qpair failed and we were unable to recover it. 00:33:21.423 [2024-07-26 18:33:47.479160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.479187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.479384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.479434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.479619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.479650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.479810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.479839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.479999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.480027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.480204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.480234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.480422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.480467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.480688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.480745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.480922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.480950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 
00:33:21.424 [2024-07-26 18:33:47.481138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.481184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.481375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.481413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.481590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.481620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.481794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.481829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.481982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.482007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.482151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.482177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.482321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.482368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.482546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.482576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.482728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.482757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.482935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.482965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 
00:33:21.424 [2024-07-26 18:33:47.483178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.483218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.483411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.483457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.483649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.483694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.483890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.483938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.484108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.484166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.484364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.484410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.484681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.484729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.484903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.484942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.485123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.485169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.485359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.485389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 
00:33:21.424 [2024-07-26 18:33:47.485578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.485630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.485783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.485812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.486007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.486035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.486231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.486277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.424 qpair failed and we were unable to recover it. 00:33:21.424 [2024-07-26 18:33:47.486445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.424 [2024-07-26 18:33:47.486491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.486697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.486749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.486954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.486981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.487141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.487188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.487387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.487418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.487606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.487656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 
00:33:21.425 [2024-07-26 18:33:47.487863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.487890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.488026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.488051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.488238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.488265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.488455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.488498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.488654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.488708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.488906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.488933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.489072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.489098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.489268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.489294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.489474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.489504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.489656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.489686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 
00:33:21.425 [2024-07-26 18:33:47.489914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.489944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.490143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.490170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.490311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.490360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.490561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.490603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.490788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.490817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.490993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.491023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.491221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.491248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.491396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.491429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.491584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.491614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.491799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.491832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 
00:33:21.425 [2024-07-26 18:33:47.492016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.492045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.492219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.492245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.492379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.492406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.492578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.492607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.492827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.492857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.493050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.493084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.493253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.493279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.493484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.425 [2024-07-26 18:33:47.493513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.425 qpair failed and we were unable to recover it. 00:33:21.425 [2024-07-26 18:33:47.493690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.493719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.493930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.493959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 
00:33:21.426 [2024-07-26 18:33:47.494128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.494155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.494320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.494355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.494570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.494599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.494781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.494811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.495010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.495037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.495212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.495239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.495442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.495472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.495633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.495676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.495854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.495888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.496044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.496080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 
00:33:21.426 [2024-07-26 18:33:47.496234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.496260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.496433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.496460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.496596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.496623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.496778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.496808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.497015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.497044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.497246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.497272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.497459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.497489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.497671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.497701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.497856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.497886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.498071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.498099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 
00:33:21.426 [2024-07-26 18:33:47.498236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.498262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.498439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.498466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.498674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.498723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.498909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.498938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.499097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.499139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.499302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.499338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.426 qpair failed and we were unable to recover it. 00:33:21.426 [2024-07-26 18:33:47.499498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.426 [2024-07-26 18:33:47.499526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.499683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.499713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.499896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.499925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.500122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.500148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 
00:33:21.427 [2024-07-26 18:33:47.500286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.500312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.500528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.500558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.500708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.500737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.500921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.500950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.501126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.501153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.501282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.501308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.501476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.501506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.501693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.501736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.501935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.501964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.502161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.502188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 
00:33:21.427 [2024-07-26 18:33:47.502325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.502357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.502508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.502538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.502713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.502742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.502922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.502953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.503117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.503145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.503300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.503353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.503562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.503591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.503774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.503803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.503983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.504009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 00:33:21.427 [2024-07-26 18:33:47.504185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.427 [2024-07-26 18:33:47.504225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:21.427 qpair failed and we were unable to recover it. 
00:33:21.427 [2024-07-26 18:33:47.504386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.427 [2024-07-26 18:33:47.504415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:21.427 qpair failed and we were unable to recover it.
00:33:21.427 [2024-07-26 18:33:47.506220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.427 [2024-07-26 18:33:47.506248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:21.427 qpair failed and we were unable to recover it.
00:33:21.707 [2024-07-26 18:33:47.518359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.707 [2024-07-26 18:33:47.518403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420
00:33:21.707 qpair failed and we were unable to recover it.
00:33:21.709 [2024-07-26 18:33:47.547141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.709 [2024-07-26 18:33:47.547190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.709 qpair failed and we were unable to recover it.
00:33:21.709 (the three-line sequence above repeats continuously between 18:33:47.504 and 18:33:47.549, differing only in timestamps, across tqpairs 0x7fcfa0000b90, 0x95a4b0, 0x7fcf98000b90 and 0x7fcfa8000b90; every connect() attempt failed with errno = 111 and no qpair was recovered)
00:33:21.709 [2024-07-26 18:33:47.549159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.549188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.549355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.549385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.549551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.549582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.549797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.549824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.550010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.550038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.550246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.550277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.550450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.550481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.550645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.550673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.550862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.550889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.551090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.551121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 
00:33:21.709 [2024-07-26 18:33:47.551316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.551344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.551506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.551536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.551744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.551771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.551962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.551990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.552129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.552157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.552294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.552323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.552490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.552519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.552709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.709 [2024-07-26 18:33:47.552740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.709 qpair failed and we were unable to recover it. 00:33:21.709 [2024-07-26 18:33:47.552924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.552955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.553148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.553176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 
00:33:21.710 [2024-07-26 18:33:47.553390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.553420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.553638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.553686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.553878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.553905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.554081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.554112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.554285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.554315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.554500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.554527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.554687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.554719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.554912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.554942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.555103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.555131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.555299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.555326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 
00:33:21.710 [2024-07-26 18:33:47.555556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.555608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.555818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.555849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.556028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.556063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.556274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.556301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.556464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.556491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.556627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.556672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.556861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.556894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.557082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.557109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.557271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.557300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.557464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.557502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 
00:33:21.710 [2024-07-26 18:33:47.557737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.557764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.557927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.557957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.558107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.558137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.558295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.558321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.558484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.558512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.558650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.558695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.558884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.558911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.559073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.559100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.559251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.559281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.559461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.559487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 
00:33:21.710 [2024-07-26 18:33:47.559627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.559653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.559840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.559867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.560071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.560098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.560254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.560283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.560458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.560505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.560717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.560743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.560903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.560930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.561054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.561106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.561269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.561295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.561428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.561456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 
00:33:21.710 [2024-07-26 18:33:47.561719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.561767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.561980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.562006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.562148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.562176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.562383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.562412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.562563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.562590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.562723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.562765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.562947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.562976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.563160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.563187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.563327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.563354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.563561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.563610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 
00:33:21.710 [2024-07-26 18:33:47.563795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.563822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.563985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.564019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.564229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.564270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.564470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.564500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.564690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.564720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.564945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.564997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.565204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.565232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.565395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.565426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.565645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.710 [2024-07-26 18:33:47.565698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.710 qpair failed and we were unable to recover it. 00:33:21.710 [2024-07-26 18:33:47.565913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.565940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
00:33:21.711 [2024-07-26 18:33:47.566125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.566157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.566356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.566388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.566555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.566582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.566715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.566742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.566908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.566953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.567173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.567201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.567378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.567405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.567545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.567571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.567755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.567781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.567994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.568024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
00:33:21.711 [2024-07-26 18:33:47.568189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.568229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.568425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.568454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.568664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.568694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.568900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.568930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.569110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.569147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.569361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.569390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.569671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.569720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.569916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.569943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.570154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.570183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.570337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.570367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
00:33:21.711 [2024-07-26 18:33:47.570530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.570557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.570769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.570798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.570956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.570985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.571167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.571194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.571350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.571378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.571583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.571632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.571795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.571822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.572003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.572032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.572202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.572248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.572438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.572467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
00:33:21.711 [2024-07-26 18:33:47.572635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.572663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.572843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.572879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.573066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.573098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.573280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.573307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.573600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.573648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.573831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.573858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.574019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.574049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.574234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.574264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.574471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.574498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.574700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.574731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
00:33:21.711 [2024-07-26 18:33:47.574919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.574948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.575115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.575142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.575359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.575389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.575563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.575610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.575820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.575846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.576014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.576043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.576233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.576262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.576429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.576457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.576650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.576680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 00:33:21.711 [2024-07-26 18:33:47.576841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.576869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
00:33:21.711 [2024-07-26 18:33:47.577034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.711 [2024-07-26 18:33:47.577066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.711 qpair failed and we were unable to recover it. 
[the same connect()/qpair error triple repeats ~200 more times for tqpair=0x7fcfa8000b90 (addr=10.0.0.2, port=4420) between 18:33:47.577 and 18:33:47.621; identical repetitions collapsed here] 
00:33:21.715 [2024-07-26 18:33:47.621248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.621279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 
00:33:21.715 [2024-07-26 18:33:47.621474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.621501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.621664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.621690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.621879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.621909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.622132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.622159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.622348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.622379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.622585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.622615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.622824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.622852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.623005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.623035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.623254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.623283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.623471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.623497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 
00:33:21.715 [2024-07-26 18:33:47.623632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.623658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.623827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.623859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.624018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.624045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.624206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.624235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.624445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.624472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.624639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.624667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.624828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.624854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.625065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.625096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.625246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.625272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.625405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.625432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 
00:33:21.715 [2024-07-26 18:33:47.625593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.625620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.625807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.625834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.626023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.626053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.626235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.626261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.626408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.626435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.626619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.626648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.626802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.626831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.627044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.715 [2024-07-26 18:33:47.627077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.715 qpair failed and we were unable to recover it. 00:33:21.715 [2024-07-26 18:33:47.627266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.627296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.627450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.627480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 
00:33:21.716 [2024-07-26 18:33:47.627666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.627692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.627871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.627900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.628051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.628086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.628290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.628317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.628502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.628532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.628705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.628736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.628911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.628938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.629125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.629154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.629313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.629344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.629554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.629581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 
00:33:21.716 [2024-07-26 18:33:47.629801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.629830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.630004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.630034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.630235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.630262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.630471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.630499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.630716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.630743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.630928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.630956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.631134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.631164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.631314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.631344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.631520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.631548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.631693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.631719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 
00:33:21.716 [2024-07-26 18:33:47.631853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.631881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.632047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.632085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.632274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.632303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.632484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.632510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.632696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.632723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.632882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.632912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.633100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.633131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.633295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.633321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.633506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.633535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.633713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.633741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 
00:33:21.716 [2024-07-26 18:33:47.633948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.633978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.634167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.634195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.634332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.634359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.634519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.634545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.634754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.634784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.634970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.635000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.635190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.635217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.635429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.635460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.635637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.635665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.635871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.635898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 
00:33:21.716 [2024-07-26 18:33:47.636090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.636120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.636277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.636307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.636495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.636523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.636708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.636734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.636874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.636900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.637092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.637119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.637251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.637278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.637446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.637490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.637658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.637685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.637868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.637897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 
00:33:21.716 [2024-07-26 18:33:47.638080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.638110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.638326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.638353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.638546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.638576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.638756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.638785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.716 [2024-07-26 18:33:47.638952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.716 [2024-07-26 18:33:47.638978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.716 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.639184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.639215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.639369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.639399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.639561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.639587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.639742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.639768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.639905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.639933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 
00:33:21.717 [2024-07-26 18:33:47.640120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.640147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.640339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.640369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.640568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.640598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.640788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.640815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.640975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.641002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.641205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.641236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.641400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.641428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.641592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.641619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.641831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.641860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.642047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.642080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 
00:33:21.717 [2024-07-26 18:33:47.642289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.642319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.642498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.642527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.642698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.642725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.642920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.642950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.643143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.643170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.643320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.643346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.643506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.643533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.643718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.643749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.643960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.643987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.644152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.644179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 
00:33:21.717 [2024-07-26 18:33:47.644358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.644388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.644578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.644605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.644773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.644819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.645004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.645034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.645211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.645239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.645407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.645433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.645601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.645628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.645813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.645839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.645996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.646025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.646222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.646249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 
00:33:21.717 [2024-07-26 18:33:47.646439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.646465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.646649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.646679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.646836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.646866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.647029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.647057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.647229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.647256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.647469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.647498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.647682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.647708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.647842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.647869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.648029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.648080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.648293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.648319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 
00:33:21.717 [2024-07-26 18:33:47.648506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.648535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.648743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.648775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.648973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.649000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.649207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.649237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.649392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.649421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.649629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.649655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.649869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.649898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.650115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.650141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.650328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.650355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 00:33:21.717 [2024-07-26 18:33:47.650565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.717 [2024-07-26 18:33:47.650595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.717 qpair failed and we were unable to recover it. 
00:33:21.721 [2024-07-26 18:33:47.692573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.692610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.692764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.692794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.692957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.692984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.693126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.693178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.693366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.693395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.693615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.693642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.693841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.693874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.694073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.694101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.694259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.694286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.694431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.694458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 
00:33:21.721 [2024-07-26 18:33:47.694660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.694706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.694905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.694938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.695085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.695115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.695337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.695367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.695531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.695559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.695748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.695778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.695950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.695996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.696189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.696219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.696369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.721 [2024-07-26 18:33:47.696402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.721 qpair failed and we were unable to recover it. 00:33:21.721 [2024-07-26 18:33:47.696575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.696603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 
00:33:21.722 [2024-07-26 18:33:47.696806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.696835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.697023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.697053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.697215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.697245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.697465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.697493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.697644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.697673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.697872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.697901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.698075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.698103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.698273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.698300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.698461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.698488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.698656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.698684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 
00:33:21.722 [2024-07-26 18:33:47.698823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.698851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.698986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.699015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.699196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.699224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.699420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.699453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.699646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.699674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.699832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.699862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.700050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.700086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.700241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.700272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.700489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.700517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.700687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.700718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 
00:33:21.722 [2024-07-26 18:33:47.700893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.700923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.701130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.701158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.701324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.701354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.701564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.701596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.701757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.701784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.701972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.702002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.702165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.702196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.702377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.702405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.702620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.702652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.702813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.702843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 
00:33:21.722 [2024-07-26 18:33:47.703048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.703086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.703270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.703299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.703594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.703652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.703860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.703888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.704087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.704121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.704280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.704311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.704524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.704552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.704735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.704766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.704968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.705004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.705187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.705215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 
00:33:21.722 [2024-07-26 18:33:47.705402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.705433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.705704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.705755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.705936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.705963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.706160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.706193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.706360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.706390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.706563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.706591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.706802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.706834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.707053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.707089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.707252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.707281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.707468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.707501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 
00:33:21.722 [2024-07-26 18:33:47.707781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.707832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.708019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.708047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.708248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.708281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.708442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.708473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.722 [2024-07-26 18:33:47.708659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.722 [2024-07-26 18:33:47.708687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.722 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.708852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.708880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.709028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.709056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.709237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.709265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.709424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.709468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.709643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.709674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 
00:33:21.723 [2024-07-26 18:33:47.709863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.709890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.710076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.710108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.710292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.710323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.710490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.710518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.710709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.710737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.710939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.710970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.711144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.711172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.711354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.711385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.711567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.711598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.711788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.711816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 
00:33:21.723 [2024-07-26 18:33:47.712022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.712053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.712263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.712291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.712431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.712458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.712599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.712627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.712792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.712838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.713017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.713047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.713223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.713253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.713446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.713476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.713701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.713734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.713898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.713929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 
00:33:21.723 [2024-07-26 18:33:47.714083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.714115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.714307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.714335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.714523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.714554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.714739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.714767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.714939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.714967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.715152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.715183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.715332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.715363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.715545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.715573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.715786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.715816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.716012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.716040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 
00:33:21.723 [2024-07-26 18:33:47.716240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.716268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.716422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.716453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.716623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.716654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.716858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.716885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.717029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.717065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.717236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.717265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.717429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.717456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.717624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.717654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.717819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.717847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.718005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.718037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 
00:33:21.723 [2024-07-26 18:33:47.718256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.718284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.718481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.718513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.718696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.718724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.718910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.718941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.719094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.719125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.719315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.719343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.719530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.719561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.719736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.719766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.719956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.719985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.720126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.720155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 
00:33:21.723 [2024-07-26 18:33:47.720353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.720381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.720571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.720598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.723 qpair failed and we were unable to recover it. 00:33:21.723 [2024-07-26 18:33:47.720761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.723 [2024-07-26 18:33:47.720792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.720979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.721009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.721211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.721242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.721457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.721487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.721666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.721696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.721882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.721909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.722122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.722158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.722373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.722404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 
00:33:21.724 [2024-07-26 18:33:47.722592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.722619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.722796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.722826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.723005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.723035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.723257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.723284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.723476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.723507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.723699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.723727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.723863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.723891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.724075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.724107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.724316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.724347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.724512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.724539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 
00:33:21.724 [2024-07-26 18:33:47.724749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.724780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.724932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.724963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.725158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.725187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.725375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.725407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.725591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.725622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.725807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.725835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.726006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.726035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.726240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.726284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.726446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.726475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.726668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.726698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 
00:33:21.724 [2024-07-26 18:33:47.726966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.727016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.727210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.727238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.727430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.727460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.727682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.727710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.727873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.727901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.728055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.728090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.728248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.728275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.728411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.728437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.728623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.728652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.728830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.728857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 
00:33:21.724 [2024-07-26 18:33:47.728992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.729019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.729186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.729213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.729395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.729425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.729582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.729608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.729831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.729860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.730009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.730039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.730204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.730230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.730413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.730443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.730620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.730654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.730868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.730895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 
00:33:21.724 [2024-07-26 18:33:47.731062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.731090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.731226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.731255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.731455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.731483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.731685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.724 [2024-07-26 18:33:47.731711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.724 qpair failed and we were unable to recover it. 00:33:21.724 [2024-07-26 18:33:47.731874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.731900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.732069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.732096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.732309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.732340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.732522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.732552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.732715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.732742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.732909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.732950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 
00:33:21.725 [2024-07-26 18:33:47.733154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.733182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.733362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.733389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.733617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.733647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.733830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.733860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.734026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.734053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.734263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.734289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.734450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.734479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.734667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.734694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.734870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.734900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.735082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.735112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 
00:33:21.725 [2024-07-26 18:33:47.735277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.735303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.735462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.735488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.735650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.735694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.735853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.735880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.736056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.736093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.736245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.736274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.736456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.736483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.736664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.736694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.736845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.736875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.737041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.737072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 
00:33:21.725 [2024-07-26 18:33:47.737208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.737252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.737464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.737493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.737679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.737706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.737880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.737910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.738092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.738122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.738315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.738342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.738500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.738529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.738713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.738743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.738911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.738941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.739080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.739104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 
00:33:21.725 [2024-07-26 18:33:47.739281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.739311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.739468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.739494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.739675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.739704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.739886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.739916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.740132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.740159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.740371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.740398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.740583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.740610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.740786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.740813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.741000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.741030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.741232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.741259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 
00:33:21.725 [2024-07-26 18:33:47.741392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.741419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.741584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.741611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.741776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.741803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.741964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.741991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.742127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.742153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.742331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.742358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.742569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.742595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.742779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.742810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.742993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.743023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.725 qpair failed and we were unable to recover it. 00:33:21.725 [2024-07-26 18:33:47.743185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.725 [2024-07-26 18:33:47.743212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 
00:33:21.726 [2024-07-26 18:33:47.743393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.743423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.743602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.743632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.743848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.743875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.744088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.744118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.744301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.744331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.744493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.744520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.744701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.744732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.744908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.744938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.745121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.745148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.745358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.745388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 
00:33:21.726 [2024-07-26 18:33:47.745548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.745575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.745735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.745762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.745948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.745977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.746181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.746211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.746415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.746442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.746613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.746641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.746809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.746836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.747051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.747087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.747271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.747302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.747488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.747518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 
00:33:21.726 [2024-07-26 18:33:47.747705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.747732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.747910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.747939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.748116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.748146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.748312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.748339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.748522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.748553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.748738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.748768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.748973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.749000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.749198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.749228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.749406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.749437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.749646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.749673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 
00:33:21.726 [2024-07-26 18:33:47.749806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.749833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.750018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.750048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.750266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.750293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.750508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.750538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.750688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.750719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.750930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.750957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.751134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.751163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.751315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.751344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.751555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.751582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.751791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.751820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 
00:33:21.726 [2024-07-26 18:33:47.751985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.752017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.752186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.752214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.752426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.752456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.752671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.752701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.752884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.752914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.753115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.753143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.753300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.753327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.753530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.753557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.753720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.753746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.753906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.753934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 
00:33:21.726 [2024-07-26 18:33:47.754115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.754143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.754356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.754385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.754590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.754620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.754825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.754852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.755068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.755098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.726 [2024-07-26 18:33:47.755273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.726 [2024-07-26 18:33:47.755305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.726 qpair failed and we were unable to recover it. 00:33:21.727 [2024-07-26 18:33:47.755465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-26 18:33:47.755493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.727 qpair failed and we were unable to recover it. 00:33:21.727 [2024-07-26 18:33:47.755661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-26 18:33:47.755688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.727 qpair failed and we were unable to recover it. 00:33:21.727 [2024-07-26 18:33:47.755894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-26 18:33:47.755928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.727 qpair failed and we were unable to recover it. 00:33:21.727 [2024-07-26 18:33:47.756113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-26 18:33:47.756140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.727 qpair failed and we were unable to recover it. 
00:33:21.727 [2024-07-26 18:33:47.756274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.727 [2024-07-26 18:33:47.756317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.727 qpair failed and we were unable to recover it.
00:33:21.727 [duplicate log entries elided: the same three-line sequence, connect() failed (errno = 111), sock connection error of tqpair=0x7fcfa8000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it.", repeats continuously with fresh timestamps from 18:33:47.756472 through 18:33:47.799950]
00:33:21.730 [2024-07-26 18:33:47.800113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.730 [2024-07-26 18:33:47.800141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.730 qpair failed and we were unable to recover it. 00:33:21.730 [2024-07-26 18:33:47.800330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.730 [2024-07-26 18:33:47.800357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.730 qpair failed and we were unable to recover it. 00:33:21.730 [2024-07-26 18:33:47.800495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.730 [2024-07-26 18:33:47.800522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.730 qpair failed and we were unable to recover it. 00:33:21.730 [2024-07-26 18:33:47.800655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.730 [2024-07-26 18:33:47.800682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.800815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.800842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.801013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.801040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.801191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.801218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.801385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.801412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.801542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.801568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.801710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.801738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 
00:33:21.731 [2024-07-26 18:33:47.801934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.801960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.802120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.802147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.802334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.802361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.802524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.802550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.802693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.802719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.802883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.802917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.803112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.803139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.803272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.803299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.803438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.803465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.803631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.803658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 
00:33:21.731 [2024-07-26 18:33:47.803846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.803873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.804032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.804065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.804232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.804260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.804397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.804424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.804589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.804616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.804779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.804806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.804968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.804995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.805172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.805201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.805377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.805407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.805626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.805653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 
00:33:21.731 [2024-07-26 18:33:47.805787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.805832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.806005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.806035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.806267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.806295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.806478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.806507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.806690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.806719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.806876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.806903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.807091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.807118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.807286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.807313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.807470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.807497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.807679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.807710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 
00:33:21.731 [2024-07-26 18:33:47.807921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.807951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.808114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.808142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.808275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.808302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.808517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.808547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.808739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.808765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.808953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.808983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.809176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.809207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.809373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.809400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.809565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.809608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.809756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.809786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 
00:33:21.731 [2024-07-26 18:33:47.809994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.810021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.810170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.810198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.810356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.810384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.810549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.810577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.810735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.810765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.810944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.810980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.811189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.811217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.811363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.811390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.811581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.811610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.811767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.811793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 
00:33:21.731 [2024-07-26 18:33:47.811979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.812009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.812159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.812190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.812378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.812405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.812616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.812645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.812795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.812825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.731 qpair failed and we were unable to recover it. 00:33:21.731 [2024-07-26 18:33:47.813015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.731 [2024-07-26 18:33:47.813042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.813228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.813258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.813435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.813465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.813676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.813703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.813845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.813873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 
00:33:21.732 [2024-07-26 18:33:47.814014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.814041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.814224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.814251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.814461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.814491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.814636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.814665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.814849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.814876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.815030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.815057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.815206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.815234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.815396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.815424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.815576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.815603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.815787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.815818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 
00:33:21.732 [2024-07-26 18:33:47.815981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.816008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.816176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.816203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.816370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.816398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.816564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.816591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.816778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.816807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.817014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.817044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.817215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.817242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.817417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.817444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.817645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.817672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.817868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.817895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 
00:33:21.732 [2024-07-26 18:33:47.818085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.818115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.818299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.818329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.818543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.818570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.818752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.818782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.818962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.818993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.819183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.819215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.819396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.819426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.819606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.819636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.819852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.819879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.820044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.820080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 
00:33:21.732 [2024-07-26 18:33:47.820288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.820317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.820478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.820504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.820689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.820719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.820895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.820924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.821112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.821139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.821325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.821355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.821533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.821562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.821775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.821802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.822023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.822052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.822240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.822270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 
00:33:21.732 [2024-07-26 18:33:47.822437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.822464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.822669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.822699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.822855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.822895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.823079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.823106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.823260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.823302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.823504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.823534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.823714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.823741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.823953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.823983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.824132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.824161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.824350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.824378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 
00:33:21.732 [2024-07-26 18:33:47.824558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.824588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.824736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.824766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.824980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.825009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.825225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.825252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.732 qpair failed and we were unable to recover it. 00:33:21.732 [2024-07-26 18:33:47.825409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.732 [2024-07-26 18:33:47.825453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.733 qpair failed and we were unable to recover it. 00:33:21.733 [2024-07-26 18:33:47.825658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.733 [2024-07-26 18:33:47.825685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.733 qpair failed and we were unable to recover it. 00:33:21.733 [2024-07-26 18:33:47.825868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.733 [2024-07-26 18:33:47.825898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.733 qpair failed and we were unable to recover it. 00:33:21.733 [2024-07-26 18:33:47.826088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.733 [2024-07-26 18:33:47.826118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.733 qpair failed and we were unable to recover it. 00:33:21.733 [2024-07-26 18:33:47.826298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.733 [2024-07-26 18:33:47.826325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.733 qpair failed and we were unable to recover it. 00:33:21.733 [2024-07-26 18:33:47.826503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.733 [2024-07-26 18:33:47.826532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:21.733 qpair failed and we were unable to recover it. 
00:33:21.733 [2024-07-26 18:33:47.826736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.733 [2024-07-26 18:33:47.826765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:21.733 qpair failed and we were unable to recover it.
00:33:22.013 [... the same three-line connect()/qpair error for tqpair=0x7fcfa8000b90 repeats continuously from 18:33:47.826952 through 18:33:47.835088; duplicate entries elided ...]
00:33:22.013 [2024-07-26 18:33:47.835275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.013 [2024-07-26 18:33:47.835312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.013 qpair failed and we were unable to recover it.
00:33:22.019 [... the same three-line connect()/qpair error for tqpair=0x95a4b0 repeats continuously from 18:33:47.835487 through 18:33:47.886261; duplicate entries elided ...]
00:33:22.019 [2024-07-26 18:33:47.886448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.886476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.886692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.886723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.886934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.886964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.887146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.887175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.887357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.887388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.887601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.887631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.887811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.887840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.019 [2024-07-26 18:33:47.888029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.019 [2024-07-26 18:33:47.888068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.019 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.888217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.888247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.888435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.888463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 
00:33:22.020 [2024-07-26 18:33:47.889042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.889083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.889291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.889319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.889482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.889514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.889694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.889726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.889907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.889937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.890144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.890173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.890395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.890424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.890609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.890639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.890831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.890858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.891073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.891103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 
00:33:22.020 [2024-07-26 18:33:47.891284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.891313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.891470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.891497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.891659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.891686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.891876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.891905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.892083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.892110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.892243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.892276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.892480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.892510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.892722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.892749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.892883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.892908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.893070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.893098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 
00:33:22.020 [2024-07-26 18:33:47.893274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.893300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.893437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.893462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.893624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.893651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.893818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.893845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.894008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.894038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.894232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.020 [2024-07-26 18:33:47.894262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.020 qpair failed and we were unable to recover it. 00:33:22.020 [2024-07-26 18:33:47.894477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.894504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.894673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.894700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.894872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.894901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.895055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.895092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 
00:33:22.021 [2024-07-26 18:33:47.895271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.895301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.895514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.895541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.895702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.895729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.895916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.895945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.896121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.896151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.896342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.896368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.896502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.896528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.896716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.896745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.896953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.896980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.897171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.897201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 
00:33:22.021 [2024-07-26 18:33:47.897372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.897402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.897594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.897621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.897787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.897813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.897988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.898022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.898189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.898217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.898435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.898465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.898615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.898645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.898801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.898827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.898991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.899017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.899198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.899226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 
00:33:22.021 [2024-07-26 18:33:47.899388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.899414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.899599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.899629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.899775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.899804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.900014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.900040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.900229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.900260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.900466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.021 [2024-07-26 18:33:47.900495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.021 qpair failed and we were unable to recover it. 00:33:22.021 [2024-07-26 18:33:47.900675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.900702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.900886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.900916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.901091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.901122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.901286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.901313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 
00:33:22.022 [2024-07-26 18:33:47.901438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.901486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.901652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.901678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.901870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.901897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.902085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.902115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.902292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.902322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.902504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.902530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.902700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.902727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.902912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.902941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.903093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.903122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.903323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.903352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 
00:33:22.022 [2024-07-26 18:33:47.903507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.903541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.903727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.903754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.903920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.903968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.904130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.904160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.904341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.904368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.904570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.904600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.904770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.904799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.905006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.905032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.905231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.905258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.905437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.905467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 
00:33:22.022 [2024-07-26 18:33:47.905646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.905673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.905817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.905844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.906013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.906080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.906291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.906317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.906524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.906569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.906782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.906813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.907022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.907049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.907204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.022 [2024-07-26 18:33:47.907231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.022 qpair failed and we were unable to recover it. 00:33:22.022 [2024-07-26 18:33:47.907402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.907430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.907607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.907634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 
00:33:22.023 [2024-07-26 18:33:47.907872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.907924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.908109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.908153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.908343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.908370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.908533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.908561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.908774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.908805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.908975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.909002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.909162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.909203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.909398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.909435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.909595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.909623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.909787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.909815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 
00:33:22.023 [2024-07-26 18:33:47.910028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.910057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.910235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.910262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.910398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.910425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.910592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.910619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.910749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.910775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.910912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.910940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.911107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.911135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.911280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.911307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.911468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.911512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.911714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.911743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 
00:33:22.023 [2024-07-26 18:33:47.911932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.911959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.912198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.912239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.912412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.912440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.912613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.023 [2024-07-26 18:33:47.912640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.023 qpair failed and we were unable to recover it. 00:33:22.023 [2024-07-26 18:33:47.912803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.024 [2024-07-26 18:33:47.912846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.024 qpair failed and we were unable to recover it. 00:33:22.024 [2024-07-26 18:33:47.913026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.024 [2024-07-26 18:33:47.913055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.024 qpair failed and we were unable to recover it. 00:33:22.024 [2024-07-26 18:33:47.913275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.024 [2024-07-26 18:33:47.913302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.024 qpair failed and we were unable to recover it. 00:33:22.024 [2024-07-26 18:33:47.913526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.024 [2024-07-26 18:33:47.913579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.024 qpair failed and we were unable to recover it. 00:33:22.024 [2024-07-26 18:33:47.913795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.024 [2024-07-26 18:33:47.913825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.024 qpair failed and we were unable to recover it. 00:33:22.024 [2024-07-26 18:33:47.915047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.024 [2024-07-26 18:33:47.915090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.024 qpair failed and we were unable to recover it. 
00:33:22.024 [2024-07-26 18:33:47.915303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.024 [2024-07-26 18:33:47.915330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:22.024 qpair failed and we were unable to recover it.
[The three lines above repeat continuously, with only the timestamps advancing, from 18:33:47.915303 through 18:33:47.961780 (console time 00:33:22.024 to 00:33:22.030). Every connect() attempt to 10.0.0.2, port 4420 returns errno = 111, and every qpair fails without recovery. The failing tqpair is 0x7fcfa8000b90 throughout, except for one short run of identical failures against tqpair=0x95a4b0 between 18:33:47.943511 and 18:33:47.946038, after which the 0x7fcfa8000b90 failures resume.]
00:33:22.030 [2024-07-26 18:33:47.961964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-26 18:33:47.962003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:22.030 qpair failed and we were unable to recover it. 00:33:22.030 [2024-07-26 18:33:47.962228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-26 18:33:47.962267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.030 qpair failed and we were unable to recover it. 00:33:22.030 [2024-07-26 18:33:47.962501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-26 18:33:47.962547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.030 qpair failed and we were unable to recover it. 00:33:22.030 [2024-07-26 18:33:47.962742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-26 18:33:47.962786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.030 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.962922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.962948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.963122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.963159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.963327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.963370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.963583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.963628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.963866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.963910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.964102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.964131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 
00:33:22.031 [2024-07-26 18:33:47.964294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.964320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.964543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.964571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.964758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.964787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.964945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.964975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.965138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.965164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.965348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.965382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.965560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.965588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.965763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.965791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.965969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.965997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.966196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.966222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 
00:33:22.031 [2024-07-26 18:33:47.966384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.966409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.966605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.966633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.966844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.966872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.967026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.967051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.967190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.967216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.967400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.967428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.967595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.967637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.967819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.967847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.968014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.968039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.968190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.968215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 
00:33:22.031 [2024-07-26 18:33:47.968371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.968398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.968576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.968603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.968811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.968841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.968989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.969018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.969208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.969234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.969432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.969462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.969609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.031 [2024-07-26 18:33:47.969636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.031 qpair failed and we were unable to recover it. 00:33:22.031 [2024-07-26 18:33:47.969794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.969823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.969995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.970023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.970223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.970248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 
00:33:22.032 [2024-07-26 18:33:47.970403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.970431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.970594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.970619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.970838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.970866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.971065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.971109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.971241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.971266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.971404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.971429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.971599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.971627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.971855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.971883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.972070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.972116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.972260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.972286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 
00:33:22.032 [2024-07-26 18:33:47.972443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.972472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.972648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.972676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.972847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.972875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.973101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.973142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.973288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.973316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.973540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.973584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.973800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.973843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.973985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.974012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.974190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.974217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.974441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.974486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 
00:33:22.032 [2024-07-26 18:33:47.974682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.974712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.974889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.974915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.975126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.975158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.975349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.975397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.975615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.975659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.975796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.975824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.975992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.976019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.976213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.976257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.976485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.976529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.032 [2024-07-26 18:33:47.976696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.976740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 
00:33:22.032 [2024-07-26 18:33:47.976931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.032 [2024-07-26 18:33:47.976957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.032 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.977143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.977187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.977400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.977444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.977638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.977683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.977859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.977884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.978031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.978065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.978263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.978308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.978514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.978557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.978717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.978761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.978897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.978924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 
00:33:22.033 [2024-07-26 18:33:47.979098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.979136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.979326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.979369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.979535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.979578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.979757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.979800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.979965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.979992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.980158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.980202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.980381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.980425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.980618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.980666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.980809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.980834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.980989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.981028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 
00:33:22.033 [2024-07-26 18:33:47.981231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.981261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.981473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.981502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.981682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.981710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.981891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.981919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.982110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.982136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.982274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.982299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.982517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.982546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.982699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.982727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.982875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.982902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.983105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.983131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 
00:33:22.033 [2024-07-26 18:33:47.983300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.983342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.983556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.983584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.983803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.983831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.983993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.984022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.033 qpair failed and we were unable to recover it. 00:33:22.033 [2024-07-26 18:33:47.984207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.033 [2024-07-26 18:33:47.984233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.984423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.984451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.984623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.984651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.984857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.984884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.985070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.985099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.985261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.985287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 
00:33:22.034 [2024-07-26 18:33:47.985451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.985479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.985642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.985670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.985831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.985858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.986030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.986055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.986226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.986251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.986423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.986467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.986686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.986742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.986926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.986955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.987136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.987162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.987321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.987346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 
00:33:22.034 [2024-07-26 18:33:47.987527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.987579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.987726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.987753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.987905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.987934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.988154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.988180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.988332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.988375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.988552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.988577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.988761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.988788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.988947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.988975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.989135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.989161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 00:33:22.034 [2024-07-26 18:33:47.989649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.034 [2024-07-26 18:33:47.989680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.034 qpair failed and we were unable to recover it. 
00:33:22.034 [2024-07-26 18:33:47.989834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.034 [2024-07-26 18:33:47.989863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.034 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and nvme_tcp qpair error against 10.0.0.2, port 4420 repeat verbatim for every reconnect attempt from 18:33:47.989 through 18:33:48.032; only the timestamps change ...]
00:33:22.041 [2024-07-26 18:33:48.032076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.041 [2024-07-26 18:33:48.032102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.041 qpair failed and we were unable to recover it.
00:33:22.041 [2024-07-26 18:33:48.032241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.032266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.032403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.032428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.032589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.032614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.032775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.032800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.033004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.033032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.033198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.033224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.033403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.033430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.033612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.033637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.033810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.033835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.034008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.034036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 
00:33:22.041 [2024-07-26 18:33:48.034204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.034232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.034379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.034405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.034537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.034563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.034727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.034752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.034901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.034929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.035083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.035112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.035293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.035318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.035458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.035483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.035654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.035679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 00:33:22.041 [2024-07-26 18:33:48.035867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.035892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.041 qpair failed and we were unable to recover it. 
00:33:22.041 [2024-07-26 18:33:48.036080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.041 [2024-07-26 18:33:48.036108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.036255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.036283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.036472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.036497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.036685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.036711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.036934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.036962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.037122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.037148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.037315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.037341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.037513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.037541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.037689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.037714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.037898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.037926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 
00:33:22.042 [2024-07-26 18:33:48.038080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.038109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.038294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.038319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.038461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.038486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.038662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.038687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.038816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.038841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.039031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.039064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.039243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.039268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.039435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.039460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.039648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.039676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.039830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.039859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 
00:33:22.042 [2024-07-26 18:33:48.040073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.040100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.040291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.040318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.040465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.040493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.040677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.040702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.040872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.040898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.041087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.041113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.041242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.041267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.041472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.041499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.041649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.041684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.041846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.041870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 
00:33:22.042 [2024-07-26 18:33:48.042065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.042091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.042225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.042250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.042441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.042466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.042 [2024-07-26 18:33:48.042598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.042 [2024-07-26 18:33:48.042623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.042 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.042758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.042782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.042939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.042964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.043163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.043192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.043379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.043404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.043592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.043617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.043775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.043804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 
00:33:22.043 [2024-07-26 18:33:48.043986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.044013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.044178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.044204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.044395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.044423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.044611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.044636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.044796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.044821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.044978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.045005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.045187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.045213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.045376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.045402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.045560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.045585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.045752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.045777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 
00:33:22.043 [2024-07-26 18:33:48.045917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.045942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.046126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.046154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.046357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.046381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.046534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.046559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.046713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.046742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.046930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.046958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.047146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.047171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.047338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.047364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.047494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.047519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.047720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.047745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 
00:33:22.043 [2024-07-26 18:33:48.047910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.047936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.048148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.048176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.048353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.048378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.048555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.048583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.048796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.048821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.048958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.048983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.049150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.049175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.043 [2024-07-26 18:33:48.049303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.043 [2024-07-26 18:33:48.049328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.043 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.049516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.049541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.049749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.049782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 
00:33:22.044 [2024-07-26 18:33:48.049968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.049996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.050163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.050188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.050316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.050341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.050532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.050560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.050737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.050762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.050952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.050980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.051195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.051220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.051378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.051402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.051614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.051642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.051845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.051873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 
00:33:22.044 [2024-07-26 18:33:48.052056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.052086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.052233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.052259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.052393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.052418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.052590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.052615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.052822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.052850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.053029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.053057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.053252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.053277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.053456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.053485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.053658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.053686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.053846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.053871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 
00:33:22.044 [2024-07-26 18:33:48.054057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.054093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.054239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.054267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.054428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.054453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.054596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.054622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.054837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.054864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.055052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.055083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.055294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.055326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.055469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.055497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.055673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.055699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.055879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.055907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 
00:33:22.044 [2024-07-26 18:33:48.056101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.056126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.044 [2024-07-26 18:33:48.056283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.044 [2024-07-26 18:33:48.056308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.044 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.056490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.056518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.056697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.056725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.056911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.056936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.057128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.057154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.057332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.057360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.057538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.057562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.057727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.057753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.057910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.057950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 
00:33:22.045 [2024-07-26 18:33:48.058134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.058159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.058369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.058397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.058564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.058588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.058750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.058775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.058954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.058982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.059171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.059197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.059355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.059380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.059561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.059590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.059764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.059792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.060004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.060029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 
00:33:22.045 [2024-07-26 18:33:48.060225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.060254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.060421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.060446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.060583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.060609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.060790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.060818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.060967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.060995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.061178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.061203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.061346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.061371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.061532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.061557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.061745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.061771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 00:33:22.045 [2024-07-26 18:33:48.061978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.045 [2024-07-26 18:33:48.062006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.045 qpair failed and we were unable to recover it. 
00:33:22.045 [2024-07-26 18:33:48.062160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.045 [2024-07-26 18:33:48.062185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.045 qpair failed and we were unable to recover it.
00:33:22.045 [2024-07-26 18:33:48.062356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.045 [2024-07-26 18:33:48.062381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.045 qpair failed and we were unable to recover it.
00:33:22.045 [2024-07-26 18:33:48.062560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.045 [2024-07-26 18:33:48.062588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.045 qpair failed and we were unable to recover it.
00:33:22.045 [2024-07-26 18:33:48.062770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.045 [2024-07-26 18:33:48.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.045 qpair failed and we were unable to recover it.
00:33:22.045 [2024-07-26 18:33:48.062987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.045 [2024-07-26 18:33:48.063011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.045 qpair failed and we were unable to recover it.
00:33:22.045 [2024-07-26 18:33:48.063210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.045 [2024-07-26 18:33:48.063238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.045 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.063441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.063469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.063625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.063654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.063844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.063870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.064074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.064102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.064254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.064279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.064492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.064520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.064671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.064699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.064878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.064903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.065084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.065113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.065313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.065341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.065517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.065542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.065720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.065748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.065918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.065946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.066121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.066147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.066329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.066357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.066565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.066593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.066794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.066820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.067032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.067065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.067298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.067326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.067506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.067530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.067703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.067731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.067905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.067934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.068143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.068169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.068333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.068375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.068522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.068550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.068710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.068735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.068924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.068949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.069148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.069177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.069354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.069383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.069547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.069572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.069770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.069813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.070003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.070028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.070195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.070221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.046 qpair failed and we were unable to recover it.
00:33:22.046 [2024-07-26 18:33:48.070415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.046 [2024-07-26 18:33:48.070443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.070620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.070644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.070833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.070858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.071069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.071098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.071258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.071282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.071451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.071476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.071614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.071639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.071775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.071800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.071996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.072021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.072252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.072278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.072422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.072446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.072584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.072609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.072769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.072809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.072967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.072992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.073153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.073194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.073375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.073400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.073559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.073584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.073724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.073749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.073963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.073992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.074146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.074171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.074314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.074339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.047 [2024-07-26 18:33:48.074472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.047 [2024-07-26 18:33:48.074497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.047 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.074658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.074683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.074830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.074856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.075065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.075093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.075254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.075279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.075447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.075488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.075658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.075686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.075862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.075887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.076052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.076083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.076257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.076285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.076473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.076498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.076658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.076683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.076842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.076867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.077030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.077055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.077203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.077228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.077431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.077464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.077645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.077670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.077830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.077855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.078013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.078054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.078223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.078248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.078380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.078405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.078595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.078623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.078811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.078836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.078973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.078998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.079155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.079181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.079348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.079373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.079511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.079536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.079703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.079728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.079889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.079917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.080070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.080113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.080245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.080270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.080454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.080478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.080697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.080725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.048 qpair failed and we were unable to recover it.
00:33:22.048 [2024-07-26 18:33:48.080876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.048 [2024-07-26 18:33:48.080904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.081091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.081116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.081272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.081300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.081484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.081512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.081695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.081720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.081900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.081928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.082105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.082134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.082341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.082366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.082547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.082575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.082753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.082782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.082940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.082966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.083173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.083202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.083377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.083405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.083625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.083650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.083810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.083838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.084050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.084083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.084239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.084264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.084442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.084470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.084615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.084643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.084852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.084877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.085032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.085075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.085253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.085281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.085459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.085484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.085675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.085704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.085909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.085937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.086120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.086146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.086330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.086358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.086528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.086555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.086716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.086741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.086907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.086932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.087115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.087140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.087331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.087356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.087540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.087567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.087738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.049 [2024-07-26 18:33:48.087766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.049 qpair failed and we were unable to recover it.
00:33:22.049 [2024-07-26 18:33:48.087982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.088008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.088154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.088180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.088361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.088389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.088584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.088609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.088762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.088790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.088965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.088993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.089177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.089203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.089366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.089391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.089523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.089548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.089722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.089746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.089911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.089937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.090094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.090120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.090261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.090286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.090437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.090465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.090649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.090674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.090861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.090886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.091022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.091051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.091245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.091271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.091470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.091494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.091705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.091733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.091907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.091935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.092140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.092166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.092351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.092380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.092551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.092579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.092767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.092792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.093001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.093029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.093228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.093254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.093450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.093476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.093691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.093719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.093897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.093925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.094118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.094145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.094321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.094349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.094514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.094539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.094695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.050 [2024-07-26 18:33:48.094720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.050 qpair failed and we were unable to recover it.
00:33:22.050 [2024-07-26 18:33:48.094872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.094900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.095083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.095113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.095296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.095322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.095502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.095530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.095682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.095710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.095910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.095938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.096131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.096157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.096295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.096320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.096513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.051 [2024-07-26 18:33:48.096538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.051 qpair failed and we were unable to recover it.
00:33:22.051 [2024-07-26 18:33:48.096708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.096734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.096924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.096952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.097137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.097163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.097343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.097372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.097559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.097587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.097760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.097785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.097965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.097993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.098197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.098226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.098411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.098435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.098621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.098648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 
00:33:22.051 [2024-07-26 18:33:48.098798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.098826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.099016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.099041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.099182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.099207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.099361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.099385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.099549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.099575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.099761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.099790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.099995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.100022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.100174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.100200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.100344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.100387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.100559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.100587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 
00:33:22.051 [2024-07-26 18:33:48.100780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.100805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.100966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.100994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.101198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.101227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.101439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.101464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.101674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.051 [2024-07-26 18:33:48.101702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.051 qpair failed and we were unable to recover it. 00:33:22.051 [2024-07-26 18:33:48.101871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.101899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.102086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.102112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.102322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.102350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.102509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.102538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.102750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.102775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 
00:33:22.052 [2024-07-26 18:33:48.102957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.102985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.103131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.103160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.103367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.103392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.103574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.103602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.103776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.103804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.103952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.103977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.104137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.104180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.104364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.104389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.104554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.104579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.104745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.104771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 
00:33:22.052 [2024-07-26 18:33:48.104929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.104957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.105167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.105196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.105335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.105360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.105521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.105562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.105747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.105771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.105941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.105966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.106125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.106151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.106303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.106328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.106465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.106490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.106650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.106676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 
00:33:22.052 [2024-07-26 18:33:48.106802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.106827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.107007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.107035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.107202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.052 [2024-07-26 18:33:48.107228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.052 qpair failed and we were unable to recover it. 00:33:22.052 [2024-07-26 18:33:48.107366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.107391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.107558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.107583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.107763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.107791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.107969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.107994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.108200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.108229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.108387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.108414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.108570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.108596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 
00:33:22.053 [2024-07-26 18:33:48.108778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.108807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.108983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.109011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.109204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.109230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.109399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.109424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.109631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.109659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.109814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.109839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.110048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.110084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.110246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.110271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.110429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.110454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.110602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.110627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 
00:33:22.053 [2024-07-26 18:33:48.110816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.110841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.111107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.111150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.111323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.111364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.111540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.111567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.111751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.111775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.111960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.111988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.112172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.112200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.112375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.112400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.112571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.112599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.112748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.112776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 
00:33:22.053 [2024-07-26 18:33:48.112933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.112958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.113122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.113164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.113341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.113373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.113558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.113583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.113761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.113789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.113940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.113968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.053 [2024-07-26 18:33:48.114151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.053 [2024-07-26 18:33:48.114176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.053 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.114314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.114355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.114544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.114568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.114723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.114748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 
00:33:22.054 [2024-07-26 18:33:48.114940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.114969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.115114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.115143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.115329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.115354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.115483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.115508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.115712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.115740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.115898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.115922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.116126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.116155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.116329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.116357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.116542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.116567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.116775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.116803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 
00:33:22.054 [2024-07-26 18:33:48.116975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.117003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.117189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.117215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.117381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.117406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.117614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.117642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.117828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.117853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.118018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.118067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.118253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.118278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.118462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.118487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.118640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.118667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.118868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.118901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 
00:33:22.054 [2024-07-26 18:33:48.119082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.119108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.119263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.119292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.119432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.119460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.119678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.119702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.119865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.119890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.120049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.120079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.120243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.120268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.120423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.120451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.120618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.120645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.054 [2024-07-26 18:33:48.120824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.120849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 
00:33:22.054 [2024-07-26 18:33:48.121025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.054 [2024-07-26 18:33:48.121053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.054 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.121222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.121250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.121401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.121426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.121575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.121618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.121830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.121857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.122014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.122039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.122217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.122242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.122376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.122401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.122534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.122559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.122750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.122779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 
00:33:22.055 [2024-07-26 18:33:48.122933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.122960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.123138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.123164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.123338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.123381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.123562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.123587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.123739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.123764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.123950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.123977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.124162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.124191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.124357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.124382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.124570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.124595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.124803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.124831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 
00:33:22.055 [2024-07-26 18:33:48.124986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.125011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.125178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.125204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.125385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.125412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.125618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.125643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.125827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.125855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.126005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.126033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.126192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.126217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.126398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.126426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.126576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.126604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.126792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.126817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 
00:33:22.055 [2024-07-26 18:33:48.126997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.127029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.127217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.127242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.128275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.128307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.128497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.128526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.055 [2024-07-26 18:33:48.128682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.055 [2024-07-26 18:33:48.128711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.055 qpair failed and we were unable to recover it. 00:33:22.056 [2024-07-26 18:33:48.128917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.056 [2024-07-26 18:33:48.128941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.056 qpair failed and we were unable to recover it. 00:33:22.056 [2024-07-26 18:33:48.129126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.056 [2024-07-26 18:33:48.129155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.056 qpair failed and we were unable to recover it. 00:33:22.056 [2024-07-26 18:33:48.129324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.056 [2024-07-26 18:33:48.129352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.056 qpair failed and we were unable to recover it. 00:33:22.056 [2024-07-26 18:33:48.129535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.056 [2024-07-26 18:33:48.129560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.056 qpair failed and we were unable to recover it. 00:33:22.056 [2024-07-26 18:33:48.129698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.056 [2024-07-26 18:33:48.129724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.056 qpair failed and we were unable to recover it. 
00:33:22.344 [2024-07-26 18:33:48.168417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.168442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.168651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.168680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.168855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.168882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.169031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.169056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.169248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.169283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.169441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.169470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.169675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.169700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.169876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.169904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.170106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.170135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.170298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.170323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 
00:33:22.344 [2024-07-26 18:33:48.170485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.170511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.170661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.170688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.170873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.170899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.344 qpair failed and we were unable to recover it. 00:33:22.344 [2024-07-26 18:33:48.171082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.344 [2024-07-26 18:33:48.171120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.171287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.171312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.171495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.171520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.171661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.171686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.171846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.171872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.172019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.172044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.172199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.172227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 
00:33:22.345 [2024-07-26 18:33:48.172416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.172444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.172655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.172680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.172883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.172911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.173117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.173146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.173308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.173334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.173543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.173572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.173722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.173750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.173939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.173964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.174102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.174129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.174314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.174342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 
00:33:22.345 [2024-07-26 18:33:48.174492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.174517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.174724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.174752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.174933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.174961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.175144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.175169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.175354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.175382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.345 qpair failed and we were unable to recover it. 00:33:22.345 [2024-07-26 18:33:48.175561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.345 [2024-07-26 18:33:48.175589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.175774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.175799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.175981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.176009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.176200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.176225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.176355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.176381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 
00:33:22.346 [2024-07-26 18:33:48.176544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.176569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.176724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.176765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.177010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.177038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.177233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.177259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.177446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.177474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.177655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.177683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.177866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.177894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.178078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.178107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.178287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.178312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.178481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.178507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 
00:33:22.346 [2024-07-26 18:33:48.178714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.178743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.178904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.178928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.179119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.179145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.179337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.179364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.179519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.179544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.179749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.179778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.179930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.179958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.180129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.180154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.180281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.180323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.180503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.180532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 
00:33:22.346 [2024-07-26 18:33:48.180718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.180743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.180922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.180950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.181123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.181151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.181356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.181381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.181565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.181593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.181774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.346 [2024-07-26 18:33:48.181799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.346 qpair failed and we were unable to recover it. 00:33:22.346 [2024-07-26 18:33:48.181963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.181988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.182176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.182201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.182341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.182383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.182553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.182578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 
00:33:22.347 [2024-07-26 18:33:48.182792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.182819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.182961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.182988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.183165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.183198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.183394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.183419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.183585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.183610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.183776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.183802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.183989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.184017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.184178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.184203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.184415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.184443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.184609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.184635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 
00:33:22.347 [2024-07-26 18:33:48.184828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.184853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.185049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.185084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.185241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.185266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.185421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.185446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.185602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.185627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.185759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.185800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.185988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.186013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.186202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.186231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.186368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.186396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.186553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.186577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 
00:33:22.347 [2024-07-26 18:33:48.186765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.186793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.186941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.186969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.187153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.187178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.187339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.187367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.187520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.347 [2024-07-26 18:33:48.187549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.347 qpair failed and we were unable to recover it. 00:33:22.347 [2024-07-26 18:33:48.187709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.187734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.187874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.187899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.188055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.188088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.188300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.188325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.188537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.188565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 
00:33:22.348 [2024-07-26 18:33:48.188720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.188750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.188911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.188936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.189080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.189124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.189335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.189360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.189522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.189547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.189685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.189711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.189876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.189918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.190093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.190119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.190257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.190282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.190441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.190483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 
00:33:22.348 [2024-07-26 18:33:48.190637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.190662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.190840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.190869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.191042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.191077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.191250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.191279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.191470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.191495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.191662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.191688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.191848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.191873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.192025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.192053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.192234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.192262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.192445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.192470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 
00:33:22.348 [2024-07-26 18:33:48.192605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.192630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.192765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.192790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.192958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.192983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.193171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.193200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.193379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.193408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.348 [2024-07-26 18:33:48.193597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.348 [2024-07-26 18:33:48.193622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.348 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.193760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.193785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.193921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.193946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.194134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.194159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.194291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.194317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 
00:33:22.349 [2024-07-26 18:33:48.194475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.194500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.194661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.194686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.194825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.194868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.195042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.195076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.195229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.195254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.195431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.195459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.195645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.195670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.195835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.195861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.196044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.196078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 00:33:22.349 [2024-07-26 18:33:48.196260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.349 [2024-07-26 18:33:48.196285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.349 qpair failed and we were unable to recover it. 
00:33:22.349 [2024-07-26 18:33:48.196416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.349 [2024-07-26 18:33:48.196445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.349 qpair failed and we were unable to recover it.
00:33:22.356 [... the same error triple (connect() failed, errno = 111; sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim from 18:33:48.196 through 18:33:48.238 ...]
00:33:22.356 [2024-07-26 18:33:48.238194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.238220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.238357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.238381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.238545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.238570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.238744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.238772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.238928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.238953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.239137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.239167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.239327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.239352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.356 [2024-07-26 18:33:48.239519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.356 [2024-07-26 18:33:48.239544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.356 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.239794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.239845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.240018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.240045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 
00:33:22.357 [2024-07-26 18:33:48.240209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.240234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.240378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.240422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.240602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.240630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.240811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.240836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.240988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.241016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.241193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.241221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.241383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.241407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.241551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.241594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.241760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.241785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.241948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.241973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 
00:33:22.357 [2024-07-26 18:33:48.242162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.242191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.242378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.242405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.242590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.242615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.242833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.242885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.243037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.243075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.243261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.243286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.243426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.243468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.243645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.243673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.243830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.243854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.244038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.244074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 
00:33:22.357 [2024-07-26 18:33:48.244253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.244281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.244425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.244449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.244612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.244658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.244810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.244838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.244996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.245021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.245161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.245207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.245362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.357 [2024-07-26 18:33:48.245389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.357 qpair failed and we were unable to recover it. 00:33:22.357 [2024-07-26 18:33:48.245541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.245566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.245727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.245767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.245945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.245972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 
00:33:22.358 [2024-07-26 18:33:48.246189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.246215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.246372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.246400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.246574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.246601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.246785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.246810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.246997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.247025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.247221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.247247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.247405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.247430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.247603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.247651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.247802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.247830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.247985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.248010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 
00:33:22.358 [2024-07-26 18:33:48.248197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.248226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.248434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.248459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.248595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.248619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.248780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.248821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.248983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.249008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.249194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.249219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.249379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.249407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.249589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.249617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.249800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.249826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.249996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.250024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 
00:33:22.358 [2024-07-26 18:33:48.250187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.250212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.250369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.250394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.250536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.250565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.250735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.250763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.250941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.250968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.251155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.251181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.358 [2024-07-26 18:33:48.251311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.358 [2024-07-26 18:33:48.251352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.358 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.251537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.251562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.251760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.251788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.251991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.252018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 
00:33:22.359 [2024-07-26 18:33:48.252199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.252224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.252394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.252420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.252555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.252580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.252737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.252762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.252921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.252948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.253124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.253153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.253333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.253362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.253584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.253613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.253793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.253820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.253977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.254002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 
00:33:22.359 [2024-07-26 18:33:48.254181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.254210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.254378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.254404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.254541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.254565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.254702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.254727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.254863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.254888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.255043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.255074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.255259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.255288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.255502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.255528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.255665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.255690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.255869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.255897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 
00:33:22.359 [2024-07-26 18:33:48.256084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.256113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.256263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.256288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.256428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.256469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.256673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.256701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.256880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.256904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.257071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.257099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.257275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.257303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.359 qpair failed and we were unable to recover it. 00:33:22.359 [2024-07-26 18:33:48.257466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.359 [2024-07-26 18:33:48.257491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.257625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.257650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.257843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.257871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 
00:33:22.360 [2024-07-26 18:33:48.258052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.258082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.258265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.258293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.258438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.258466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.258650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.258680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.258856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.258884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.259069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.259094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.259226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.259251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.259433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.259461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.259629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.259657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.259834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.259859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 
00:33:22.360 [2024-07-26 18:33:48.260051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.260086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.260268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.260295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.260474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.260499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.260665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.260690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.260891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.260919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.261084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.261110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.261253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.261278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.261472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.261497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.261699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.261723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.261902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.261930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 
00:33:22.360 [2024-07-26 18:33:48.262095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.262120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.262256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.262280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.262411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.262437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.262622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.262649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.262805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.262829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.262963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.263006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.263178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.360 [2024-07-26 18:33:48.263208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.360 qpair failed and we were unable to recover it. 00:33:22.360 [2024-07-26 18:33:48.263402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.361 [2024-07-26 18:33:48.263427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.361 qpair failed and we were unable to recover it. 00:33:22.361 [2024-07-26 18:33:48.263643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.361 [2024-07-26 18:33:48.263672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.361 qpair failed and we were unable to recover it. 00:33:22.361 [2024-07-26 18:33:48.263847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.361 [2024-07-26 18:33:48.263875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.361 qpair failed and we were unable to recover it. 
00:33:22.361 [2024-07-26 18:33:48.264080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.361 [2024-07-26 18:33:48.264105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.361 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated ~210 times, covering 18:33:48.264080 through 18:33:48.306586 ...]
00:33:22.368 [2024-07-26 18:33:48.306558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.368 [2024-07-26 18:33:48.306586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.368 qpair failed and we were unable to recover it.
00:33:22.368 [2024-07-26 18:33:48.306738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.306763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.306940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.306968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.307147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.307175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.307354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.307379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.307587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.307616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.307784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.307812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.308018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.308043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.308238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.308267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.308490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.308519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.308674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.308699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 
00:33:22.368 [2024-07-26 18:33:48.308908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.308936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.309097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.309123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.309310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.309335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.309553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.309581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.309724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.309752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.309932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.309957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.310115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.310141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.310273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.310315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.310536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.310561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.310762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.310790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 
00:33:22.368 [2024-07-26 18:33:48.310997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.311022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.311167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.311193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.311388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.368 [2024-07-26 18:33:48.311417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.368 qpair failed and we were unable to recover it. 00:33:22.368 [2024-07-26 18:33:48.311597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.311625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.311778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.311803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.311979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.312008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.312178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.312204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.312336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.312361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.312490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.312531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.312710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.312738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 
00:33:22.369 [2024-07-26 18:33:48.312907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.312935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.313133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.313160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.313299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.313324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.313484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.313508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.313652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.313678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.313850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.313875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.314032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.314057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.314212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.314237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.314371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.314396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.314553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.314578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 
00:33:22.369 [2024-07-26 18:33:48.314754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.314783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.314931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.314961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.315123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.315149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.315358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.315386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.315541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.315569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.315783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.315808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.315992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.316020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.316216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.316242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.316372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.316397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.316605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.316637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 
00:33:22.369 [2024-07-26 18:33:48.316809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.316837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.317022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.317047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.317235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.317264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.317405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.317433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.317611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.317636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.317799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.317824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.369 [2024-07-26 18:33:48.318027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.369 [2024-07-26 18:33:48.318055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.369 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.318217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.318242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.318411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.318436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.318590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.318615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 
00:33:22.370 [2024-07-26 18:33:48.318804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.318829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.319010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.319038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.319226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.319251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.319398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.319423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.319556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.319580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.319749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.319774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.319936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.319961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.320125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.320183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.320385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.320413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.320598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.320622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 
00:33:22.370 [2024-07-26 18:33:48.320791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.320816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.320996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.321024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.321213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.321238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.321397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.321425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.321622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.321647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.321844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.321868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.322078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.322108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.322283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.322311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.322512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.322536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.322696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.322724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 
00:33:22.370 [2024-07-26 18:33:48.322935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.322960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.323124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.323150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.323308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.323336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.323534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.323559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.323690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.323715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.323910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.323938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.324136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.324162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.324301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.324326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.324512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.324540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.370 [2024-07-26 18:33:48.324711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.324739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 
00:33:22.370 [2024-07-26 18:33:48.324924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.370 [2024-07-26 18:33:48.324952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.370 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.325163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.325189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.325364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.325392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.325572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.325597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.325809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.325837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.326013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.326041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.326258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.326283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.326462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.326490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.326643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.326671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.326877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.326901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 
00:33:22.371 [2024-07-26 18:33:48.327066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.327095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.327267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.327295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.327497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.327521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.327680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.327706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.327914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.327942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.328146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.328171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.328358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.328386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.328568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.328592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.328780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.328805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.329015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.329043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 
00:33:22.371 [2024-07-26 18:33:48.329214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.329239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.329402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.329428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.329609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.329638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.329774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.329802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.329978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.330003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.330187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.330215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.330404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.330432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.330610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.330638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.330823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.330851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.331026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.331053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 
00:33:22.371 [2024-07-26 18:33:48.331239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.331265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.331473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.331501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.331685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.331713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.331952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.331980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.371 qpair failed and we were unable to recover it. 00:33:22.371 [2024-07-26 18:33:48.332157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.371 [2024-07-26 18:33:48.332183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it. 00:33:22.372 [2024-07-26 18:33:48.332368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.372 [2024-07-26 18:33:48.332397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it. 00:33:22.372 [2024-07-26 18:33:48.332554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.372 [2024-07-26 18:33:48.332580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it. 00:33:22.372 [2024-07-26 18:33:48.332788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.372 [2024-07-26 18:33:48.332816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it. 00:33:22.372 [2024-07-26 18:33:48.332981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.372 [2024-07-26 18:33:48.333006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it. 00:33:22.372 [2024-07-26 18:33:48.333202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.372 [2024-07-26 18:33:48.333228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it. 
00:33:22.372 [2024-07-26 18:33:48.333389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.372 [2024-07-26 18:33:48.333414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.372 qpair failed and we were unable to recover it.
00:33:22.378 [... the same three-line error (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 18:33:48.333389 through 18:33:48.375834, with only the microsecond timestamps changing; every connect attempt is refused and the qpair is never recovered ...]
00:33:22.378 [2024-07-26 18:33:48.376006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.376031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.376197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.376222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.376386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.376414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.376634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.376659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.376846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.376871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.377082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.377108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.377269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.377297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.377484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.377509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.377702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.377727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.377922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.377950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 
00:33:22.378 [2024-07-26 18:33:48.378115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.378143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.378354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.378379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.378 qpair failed and we were unable to recover it. 00:33:22.378 [2024-07-26 18:33:48.378595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.378 [2024-07-26 18:33:48.378623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.378796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.378824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.378996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.379023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.379234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.379260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.379425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.379450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.379638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.379663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.379883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.379911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.380080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.380109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 
00:33:22.379 [2024-07-26 18:33:48.380264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.380289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.380467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.380496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.380706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.380733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.380915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.380940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.381124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.381153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.381326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.381353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.381554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.381579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.381742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.381767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.381974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.382001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.382205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.382231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 
00:33:22.379 [2024-07-26 18:33:48.382415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.382442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.382611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.382639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.382792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.382818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.383028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.383056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.383225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.383251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.383417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.383442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.383588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.383613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.383751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.383776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.383937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.383962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.384178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.384206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 
00:33:22.379 [2024-07-26 18:33:48.384401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.384426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.384556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.384580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.379 [2024-07-26 18:33:48.384742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.379 [2024-07-26 18:33:48.384768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.379 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.384957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.384985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.385171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.385197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.385339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.385364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.385526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.385568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.385772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.385797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.385980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.386008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.386188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.386217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 
00:33:22.380 [2024-07-26 18:33:48.386361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.386387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.386549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.386574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.386709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.386734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.386888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.386913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.387122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.387151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.387321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.387345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.387483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.387509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.387685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.387714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.387918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.387946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.388095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.388120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 
00:33:22.380 [2024-07-26 18:33:48.388262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.388305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.388458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.388486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.388667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.388692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.388871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.388899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.389103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.389131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.389295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.389321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.389463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.389488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.389688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.389713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.389855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.389880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.390018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.390043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 
00:33:22.380 [2024-07-26 18:33:48.390262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.390290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.390463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.390488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.390646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.390671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.390880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.390908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.391067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.391092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.391268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.380 [2024-07-26 18:33:48.391296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.380 qpair failed and we were unable to recover it. 00:33:22.380 [2024-07-26 18:33:48.391477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.391503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.391667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.391692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.391875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.391903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.392097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.392123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 
00:33:22.381 [2024-07-26 18:33:48.392287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.392312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.392521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.392550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.392735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.392763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.392975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.393000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.393209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.393238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.393426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.393450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.393612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.393637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.393821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.393847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.394010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.394035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.394167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.394192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 
00:33:22.381 [2024-07-26 18:33:48.394366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.394398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.394578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.394605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.394786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.394812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.394993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.395021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.395175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.395200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.395333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.395357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.395535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.395563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.395738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.395763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.395951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.395976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.396149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.396174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 
00:33:22.381 [2024-07-26 18:33:48.396337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.396380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.396545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.396570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.396712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.396755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.396928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.396956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.397138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.397164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.397317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.397345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.397498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.397526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.397731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.397756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.397958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.397983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.381 [2024-07-26 18:33:48.398135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.398177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 
00:33:22.381 [2024-07-26 18:33:48.398343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.381 [2024-07-26 18:33:48.398368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.381 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.398533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.398559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.398765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.398793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.398957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.398984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.399202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.399227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.399431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.399459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.399615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.399640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.399802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.399831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.400016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.400044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.400205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.400230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 
00:33:22.382 [2024-07-26 18:33:48.400392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.400433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.400583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.400611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.400786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.400811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.400991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.401019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.401237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.401262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.401421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.401445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.401626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.401655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.401806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.401833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.401993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.402018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 00:33:22.382 [2024-07-26 18:33:48.402179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.382 [2024-07-26 18:33:48.402205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.382 qpair failed and we were unable to recover it. 
00:33:22.382 [2024-07-26 18:33:48.402365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.382 [2024-07-26 18:33:48.402390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.382 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back to back, roughly 200 further occurrences spanning 18:33:48.402 through 18:33:48.444: every connect() attempt fails with errno = 111 for tqpair=0x95a4b0, addr=10.0.0.2, port=4420, and each qpair is reported as unrecoverable ...]
00:33:22.388 [2024-07-26 18:33:48.444248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.388 [2024-07-26 18:33:48.444273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.388 qpair failed and we were unable to recover it.
00:33:22.388 [2024-07-26 18:33:48.444407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.444432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.444587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.444612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.444743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.444767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.444974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.445002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.445158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.445186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.445336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.445365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.445549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.445575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.445728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.445756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.445903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.445931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.446123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.446149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 
00:33:22.388 [2024-07-26 18:33:48.446312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.446337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.446481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.446507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.446638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.446661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.446861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.446886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.447046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.447085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.447267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.447296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.447502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.447530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.447707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.447734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.447887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.447912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.448066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.448095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 
00:33:22.388 [2024-07-26 18:33:48.448265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.448290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.448496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.448524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.448705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.448734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.448929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.448957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.449128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.449153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.449319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.449344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.449480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.449505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.449670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.449696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.449830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.449872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.388 qpair failed and we were unable to recover it. 00:33:22.388 [2024-07-26 18:33:48.450016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.388 [2024-07-26 18:33:48.450044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 
00:33:22.389 [2024-07-26 18:33:48.450200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.450225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.450365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.450407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.450558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.450586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.450788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.450816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.450965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.450990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.451154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.451197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.451355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.451384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.451642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.451695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.451884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.451909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.452047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.452077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 
00:33:22.389 [2024-07-26 18:33:48.452267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.452295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.452446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.452474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.452654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.452679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.452851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.452880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.453077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.453103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.453265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.453308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.453467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.453492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.453653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.453682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.453833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.453861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.454073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.454107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 
00:33:22.389 [2024-07-26 18:33:48.454263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.454288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.454479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.454505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.454672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.454700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.454899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.454924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.455080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.455106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.455249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.455275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.455424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.455465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.455618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.455646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.455811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.455836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.456017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.456043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 
00:33:22.389 [2024-07-26 18:33:48.456253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.456279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.456461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.456489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.456649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.456674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.456848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.456883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.457029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.457057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.457221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.457245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.457376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.457401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.457585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.457613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.457754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.457780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.457967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.457995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 
00:33:22.389 [2024-07-26 18:33:48.458188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.458214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.389 qpair failed and we were unable to recover it. 00:33:22.389 [2024-07-26 18:33:48.458383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.389 [2024-07-26 18:33:48.458412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.458550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.458578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.458765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.458790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.458929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.458956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.459166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.459194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.459373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.459398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.459546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.459571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.390 [2024-07-26 18:33:48.459759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.390 [2024-07-26 18:33:48.459785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.390 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.460001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.460030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 
00:33:22.672 [2024-07-26 18:33:48.460194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.460220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.460389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.460414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.460558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.460583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.460781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.460811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.460992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.461020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.461190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.461219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.461398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.461423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.461610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.461638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.461797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.461826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.462007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.462035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 
00:33:22.672 [2024-07-26 18:33:48.462235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.462264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.462424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.462450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.462611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.462654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.462793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.462821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.463017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.463042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.463182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.463225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.463408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.463436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.463632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.463657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.463827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.463852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.464008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.464036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 
00:33:22.672 [2024-07-26 18:33:48.464200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.464227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.464386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.464430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.464590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.464615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.464748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.464774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.672 qpair failed and we were unable to recover it. 00:33:22.672 [2024-07-26 18:33:48.464947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.672 [2024-07-26 18:33:48.464975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.465165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.465192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.465335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.465360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.465523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.465551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.465697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.465725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.465876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.465904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 
00:33:22.673 [2024-07-26 18:33:48.466088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.466113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.466280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.466306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.466444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.466486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.466672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.466700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.466885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.466910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.467083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.467109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.467241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.467283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.467490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.467518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.467698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.467724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.467869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.467896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 
00:33:22.673 [2024-07-26 18:33:48.468055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.468108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.468254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.468282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.468485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.468509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.468683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.468709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.468881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.468918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.469103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.469132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.469316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.469341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.469487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.469513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.469648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.469673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 00:33:22.673 [2024-07-26 18:33:48.469840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.673 [2024-07-26 18:33:48.469866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.673 qpair failed and we were unable to recover it. 
00:33:22.673 [2024-07-26 18:33:48.470000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.673 [2024-07-26 18:33:48.470025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.673 qpair failed and we were unable to recover it.
00:33:22.680 last 3 messages repeated for every subsequent connection attempt through [2024-07-26 18:33:48.513022]; each retry to tqpair=0x95a4b0 at 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered
00:33:22.680 [2024-07-26 18:33:48.513165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.513191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.513366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.513392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.513545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.513571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.513767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.513793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.514006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.514033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.514198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.514226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.514388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.514414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.514584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.514609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.514774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.514800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.515016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.515044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 
00:33:22.680 [2024-07-26 18:33:48.515231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.515256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.515422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.515448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.515587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.515612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.515771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.515796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.515944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.680 [2024-07-26 18:33:48.515969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.680 qpair failed and we were unable to recover it. 00:33:22.680 [2024-07-26 18:33:48.516102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.516128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.516262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.516287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.516460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.516486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.516677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.516703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.516835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.516860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 
00:33:22.681 [2024-07-26 18:33:48.516992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.517017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.517196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.517221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.517385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.517410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.517554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.517580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.517713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.517738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.517865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.517890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.518057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.518088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.518251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.518277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.518466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.518492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.518669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.518694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 
00:33:22.681 [2024-07-26 18:33:48.518876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.518901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.519082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.519108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.519247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.519273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.519436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.519462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.519641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.519667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.519837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.519862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.520025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.520054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.520240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.520266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.520426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.520451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.520642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.520668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 
00:33:22.681 [2024-07-26 18:33:48.520831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.520856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.521072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.521115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.521258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.521284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.521457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.521482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.521667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.521692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.521857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.521882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.522020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.522045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.522186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.681 [2024-07-26 18:33:48.522212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.681 qpair failed and we were unable to recover it. 00:33:22.681 [2024-07-26 18:33:48.522355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.522381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.522550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.522575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 
00:33:22.682 [2024-07-26 18:33:48.522768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.522796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.522976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.523002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.523220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.523250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.523425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.523453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.523651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.523699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.523907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.523932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.524113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.524142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.524301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.524329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.524490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.524518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.524717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.524743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 
00:33:22.682 [2024-07-26 18:33:48.524920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.524949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.525117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.525143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.525305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.525346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.525511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.525539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.525754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.525804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.525959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.525987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.526151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.526180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.526368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.526393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.526601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.526647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.526827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.526855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 
00:33:22.682 [2024-07-26 18:33:48.527034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.527085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.527295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.527320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.527500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.527528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.527682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.527711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.527919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.527947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.528139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.528166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.528315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.528340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.528555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.528583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.528816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.682 [2024-07-26 18:33:48.528841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.682 qpair failed and we were unable to recover it. 00:33:22.682 [2024-07-26 18:33:48.529055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.529090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 
00:33:22.683 [2024-07-26 18:33:48.529279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.529305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.529492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.529528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.529740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.529787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.529993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.530018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.530201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.530230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.530418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.530443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.530582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.530607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.530747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.530773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.530962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.530987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.531136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.531179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 
00:33:22.683 [2024-07-26 18:33:48.531368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.531394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.531559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.531585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.531776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.531804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.532013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.532038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.532186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.532211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.532400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.532425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.532604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.532630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.532827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.532853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.533074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.533102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.533287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.533314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 
00:33:22.683 [2024-07-26 18:33:48.533543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.533571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.533754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.533782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.533988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.534017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.534212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.534238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.534424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.534456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.683 [2024-07-26 18:33:48.534669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.683 [2024-07-26 18:33:48.534695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.683 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.534861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.534886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.535063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.535088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.535275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.535303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.535480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.535507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 
00:33:22.684 [2024-07-26 18:33:48.535661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.535689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.535848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.535873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.536043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.536086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.536282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.536310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.536515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.536541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.536682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.536715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.536886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.536912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.537076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.537101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.537249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.537274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.537407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.537433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 
00:33:22.684 [2024-07-26 18:33:48.537623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.537650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.537824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.537852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.538027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.538052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.538195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.538219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.538386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.538412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.538590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.538618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.538874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.538902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.539084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.539110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.539244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 00:33:22.684 [2024-07-26 18:33:48.539489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.684 [2024-07-26 18:33:48.539518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.684 qpair failed and we were unable to recover it. 
00:33:22.684 [2024-07-26 18:33:48.539697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.684 [2024-07-26 18:33:48.539725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.684 qpair failed and we were unable to recover it.
00:33:22.684 [the same connect() failed, errno = 111 / sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats continuously from 18:33:48.539697 through 18:33:48.583551]
00:33:22.691 [2024-07-26 18:33:48.583523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.691 [2024-07-26 18:33:48.583551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.691 qpair failed and we were unable to recover it.
00:33:22.691 [2024-07-26 18:33:48.583710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.583736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.583947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.583975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.584156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.584182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.584337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.584362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.584534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.584560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.584741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.584774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.584923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.584951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.585138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.585164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.585352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.585386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 00:33:22.691 [2024-07-26 18:33:48.585590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.691 [2024-07-26 18:33:48.585619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.691 qpair failed and we were unable to recover it. 
00:33:22.691 [2024-07-26 18:33:48.585812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.585837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.586021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.586049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.586209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.586234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.586441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.586469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.586656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.586681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.586840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.586883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.587075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.587108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.587292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.587320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.587462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.587490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.587631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.587659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 
00:33:22.692 [2024-07-26 18:33:48.587868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.587902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.588082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.588109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.588291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.588320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.588514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.588543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.588725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.588750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.588930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.588959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.589113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.589139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.589301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.589326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.589493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.589517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.589684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.589710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 
00:33:22.692 [2024-07-26 18:33:48.589877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.589902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.590082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.590111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.590269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.590294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.590464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.590490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.590657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.590699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.590891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.590916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.591063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.591092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.591274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.591303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.591470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.591495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.591658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.591683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 
00:33:22.692 [2024-07-26 18:33:48.591848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.591873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.592031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.592066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.592263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.592288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.592462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.692 [2024-07-26 18:33:48.592489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.692 qpair failed and we were unable to recover it. 00:33:22.692 [2024-07-26 18:33:48.592678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.592703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.592867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.592894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.593085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.593114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.593285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.593313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.593526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.593551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.593759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.593787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 
00:33:22.693 [2024-07-26 18:33:48.593958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.593987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.594184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.594213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.594388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.594413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.594557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.594585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.594760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.594788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.594964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.594992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.595169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.595195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.595359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.595385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.595568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.595596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.595774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.595801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 
00:33:22.693 [2024-07-26 18:33:48.595987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.596012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.596205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.596234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.596441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.596469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.596641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.596669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.596861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.596886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.597015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.597040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.597203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.597228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.597360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.597401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.597560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.597585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.597725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.597768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 
00:33:22.693 [2024-07-26 18:33:48.597918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.597946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.598088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.598132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.598297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.598322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.598462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.598487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.598646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.598688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.598845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.598872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.599051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.599082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.599274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.693 [2024-07-26 18:33:48.599306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.693 qpair failed and we were unable to recover it. 00:33:22.693 [2024-07-26 18:33:48.599489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.599518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.599718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.599746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 
00:33:22.694 [2024-07-26 18:33:48.599903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.599928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.600086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.600129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.600310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.600338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.600486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.600513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.600693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.600718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.600896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.600924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.601121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.601146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.601350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.601378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.601570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.601595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.601781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.601810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 
00:33:22.694 [2024-07-26 18:33:48.602066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.602095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.602276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.602304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.602492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.602517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.602676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.602736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.602915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.602944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.603123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.603152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.603334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.603359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.603550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.603601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.603771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.603800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.604002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.604029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 
00:33:22.694 [2024-07-26 18:33:48.604184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.604209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.604351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.604394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.604577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.604604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.604808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.604836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.604995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.605024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.605170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.605196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.694 [2024-07-26 18:33:48.605338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.694 [2024-07-26 18:33:48.605363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.694 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.605531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.605557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.605696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.605721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.605861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.605886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 
00:33:22.695 [2024-07-26 18:33:48.606070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.606098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.606279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.606307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.606476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.606502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.606705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.606733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.606886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.606914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.607089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.607119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.607314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.607341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.607491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.607518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.607662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.607688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.607851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.607893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 
00:33:22.695 [2024-07-26 18:33:48.608096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.608138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.608284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.608310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.608564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.608592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.608763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.608791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.609032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.609057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.609265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.609294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.609477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.609503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.609700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.609725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.609890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.609915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 00:33:22.695 [2024-07-26 18:33:48.610102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.695 [2024-07-26 18:33:48.610131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.695 qpair failed and we were unable to recover it. 
00:33:22.695 [2024-07-26 18:33:48.610287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.695 [2024-07-26 18:33:48.610315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.695 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 → sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats for every reconnection attempt from 18:33:48.610 through 18:33:48.654; the intervening identical entries are elided ...]
00:33:22.702 [2024-07-26 18:33:48.654787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.702 [2024-07-26 18:33:48.654812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:22.702 qpair failed and we were unable to recover it.
00:33:22.702 [2024-07-26 18:33:48.654970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.654996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.655206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.655235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.655413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.655441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.655691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.655719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.655934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.655959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.656098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.656124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.656317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.656342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.656538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.656564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.656722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.656747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.656996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.657029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 
00:33:22.702 [2024-07-26 18:33:48.657213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.657239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.657379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.657404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.657588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.657613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.657801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.657829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.658000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.658028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.658209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.658238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.658450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.658475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.658770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.658827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.659008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.659036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.659201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.659232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 
00:33:22.702 [2024-07-26 18:33:48.659417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.702 [2024-07-26 18:33:48.659443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.702 qpair failed and we were unable to recover it. 00:33:22.702 [2024-07-26 18:33:48.659729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.659780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.659986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.660013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.660216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.660242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.660402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.660427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.660765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.660827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.661011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.661036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.661208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.661233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.661361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.661386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.661577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.661636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 
00:33:22.703 [2024-07-26 18:33:48.661813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.661841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.662022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.662050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.662211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.662237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.662378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.662404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.662607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.662635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.662914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.662975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.663158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.663184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.663335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.663361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.663522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.663547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.663725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.663753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 
00:33:22.703 [2024-07-26 18:33:48.663941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.663966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.664152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.664181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.664368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.664396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.664581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.664608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.664784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.664809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.665015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.665044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.665237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.665265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.665436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.665464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.665640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.665665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.665811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.665839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 
00:33:22.703 [2024-07-26 18:33:48.666018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.666050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.666209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.666237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.666420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.666445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.666623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.666651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.703 [2024-07-26 18:33:48.666809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.703 [2024-07-26 18:33:48.666835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.703 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.667039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.667073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.667256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.667282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.667493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.667521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.667660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.667687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.667868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.667896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 
00:33:22.704 [2024-07-26 18:33:48.668105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.668131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.668296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.668324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.668499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.668526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.668714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.668741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.668922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.668948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.669154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.669182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.669360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.669388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.669605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.669630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.669811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.669836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.670041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.670073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 
00:33:22.704 [2024-07-26 18:33:48.670278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.670305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.670480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.670507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.670692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.670717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.670888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.670914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.671078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.671104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.671284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.671312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.671491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.671517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.671679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.671726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.671881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.671909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.672102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.672129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 
00:33:22.704 [2024-07-26 18:33:48.672293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.672319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.672537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.672565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.672748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.672773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.672933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.672958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.673135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.673160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.673384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.673409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.673546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.704 [2024-07-26 18:33:48.673587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.704 qpair failed and we were unable to recover it. 00:33:22.704 [2024-07-26 18:33:48.673740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.673767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.673943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.673968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.674143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.674172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 
00:33:22.705 [2024-07-26 18:33:48.674348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.674376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.674554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.674581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.674738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.674763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.674926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.674951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.675088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.675114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.675273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.675298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.675428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.675454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.675588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.675629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.675823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.675848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.676013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.676038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 
00:33:22.705 [2024-07-26 18:33:48.676240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.676265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.676422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.676450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.676618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.676645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.676815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.676843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.677028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.677053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.677274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.677303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.677512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.677538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.677741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.677769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.677921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.677946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.705 [2024-07-26 18:33:48.678123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.678152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 
00:33:22.705 [2024-07-26 18:33:48.678326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.705 [2024-07-26 18:33:48.678354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.705 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.678528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.678556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.678716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.678741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.678901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.678926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.679127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.679152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.679289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.679314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.679472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.679497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.679707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.679734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.679964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.679995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.680207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.680233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 
00:33:22.706 [2024-07-26 18:33:48.680371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.680396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.680604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.680632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.680783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.680810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.681014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.681042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.681208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.681233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.681396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.681437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.681611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.681639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.681807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.681834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.681987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.682012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 00:33:22.706 [2024-07-26 18:33:48.682196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.682225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it. 
00:33:22.706 [2024-07-26 18:33:48.682403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.706 [2024-07-26 18:33:48.682430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.706 qpair failed and we were unable to recover it.
00:33:22.715 [... the same three-message sequence repeats for every reconnect attempt from 18:33:48.682403 through 18:33:48.723617 (log prefixes 00:33:22.706-00:33:22.715): each attempt to tqpair=0x95a4b0 (addr=10.0.0.2, port=4420) fails with errno = 111 and the qpair is not recovered ...]
00:33:22.715 [2024-07-26 18:33:48.723784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.723809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.723984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.724028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.724202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.724228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.724360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.724386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.724548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.724573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.724733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.724758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.724917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.724943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.725113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.725141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.725320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.725345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.725497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.725525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 
00:33:22.715 [2024-07-26 18:33:48.725681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.725710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.725863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.725890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.726050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.726081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.726266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.726294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.726497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.726525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.726695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.726722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.726883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.726908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.727033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.727093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.727275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.727303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.727472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.727500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 
00:33:22.715 [2024-07-26 18:33:48.727689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.727713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.715 [2024-07-26 18:33:48.727846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.715 [2024-07-26 18:33:48.727872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.715 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.728082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.728112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.728294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.728322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.728510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.728537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.728749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.728778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.728971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.728999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.729171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.729197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.729327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.729352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.729564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.729592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 
00:33:22.716 [2024-07-26 18:33:48.729742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.729770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.729920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.729948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.730116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.730142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.730274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.730318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.730542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.730567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.730700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.730724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.730863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.730888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.731045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.731078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.731226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.731254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.731457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.731485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 
00:33:22.716 [2024-07-26 18:33:48.731647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.731672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.731813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.731838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.731974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.731999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.732190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.732219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.732379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.732404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.732571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.732597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.716 qpair failed and we were unable to recover it. 00:33:22.716 [2024-07-26 18:33:48.732730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.716 [2024-07-26 18:33:48.732755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.732904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.732932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.733129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.733155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.733317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.733343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 
00:33:22.717 [2024-07-26 18:33:48.733505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.733537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.733707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.733735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.733886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.733911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.734054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.734101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.734282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.734310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.734451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.734479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.734641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.734666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.734804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.734847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.735019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.735047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.735268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.735293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 
00:33:22.717 [2024-07-26 18:33:48.735453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.735479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.735658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.735686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.735873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.735901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.736072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.736100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.736291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.736316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.736472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.736501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.736671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.736699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.736879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.736906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.737096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.737122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.737252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.737278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 
00:33:22.717 [2024-07-26 18:33:48.737459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.737487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.737690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.717 [2024-07-26 18:33:48.737718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.717 qpair failed and we were unable to recover it. 00:33:22.717 [2024-07-26 18:33:48.737883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.737908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.738086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.738113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.738252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.738294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.738445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.738472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.738658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.738683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.738878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.738906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.739091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.739132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.739272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.739297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 
00:33:22.718 [2024-07-26 18:33:48.739432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.739457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.739613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.739641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.739805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.739830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.739967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.739992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.740146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.740172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.740384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.740412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.740564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.740592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.740803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.740831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.740991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.741015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.741182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.741208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 
00:33:22.718 [2024-07-26 18:33:48.741356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.741385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.718 [2024-07-26 18:33:48.741596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.718 [2024-07-26 18:33:48.741624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.718 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.741800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.741825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.741978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.742006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.742156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.742184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.742383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.742408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.742549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.742574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.742761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.742789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.742927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.742955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.743123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.743151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 
00:33:22.719 [2024-07-26 18:33:48.743302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.743327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.743506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.743535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.743704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.743732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.743934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.743962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.744115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.744141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.744328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.744357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.744522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.744550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.744724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.744751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.744921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.744946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.745129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.745158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 
00:33:22.719 [2024-07-26 18:33:48.745324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.745349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.745481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.745506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.745697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.745722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.745892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.745920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.746107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.746132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.746266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.746307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.746497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.746522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.746728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.746757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.719 [2024-07-26 18:33:48.746970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.719 [2024-07-26 18:33:48.747002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.719 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.747165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.747193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 
00:33:22.720 [2024-07-26 18:33:48.747349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.747374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.747509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.747535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.747662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.747687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.747892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.747935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.748195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.748221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.748431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.748460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.748647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.748672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.748803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.748845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.749056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.749087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.749223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.749249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 
00:33:22.720 [2024-07-26 18:33:48.749391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.749416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.749558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.749601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.749792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.749817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.750001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.750028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.750191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.750217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.750355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.750399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.750557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.750582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.750763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.750792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.750960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.750986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 00:33:22.720 [2024-07-26 18:33:48.751180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.720 [2024-07-26 18:33:48.751208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.720 qpair failed and we were unable to recover it. 
00:33:22.727 [2024-07-26 18:33:48.789505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.789534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.789685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.789713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.789922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.789951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.790149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.790175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.790351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.790379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.790583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.790610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.790779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.790805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.790967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.790996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.791164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.791189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.791363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.791388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 
00:33:22.727 [2024-07-26 18:33:48.791523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.791547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.791731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.791760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.791934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.791962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.792131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.792159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.792305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.792330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.792540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.727 [2024-07-26 18:33:48.792568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.727 qpair failed and we were unable to recover it. 00:33:22.727 [2024-07-26 18:33:48.792744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.728 [2024-07-26 18:33:48.792772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.728 qpair failed and we were unable to recover it. 00:33:22.728 [2024-07-26 18:33:48.792925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.728 [2024-07-26 18:33:48.792953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:22.728 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.793129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.793155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.793297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.793323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 
00:33:23.013 [2024-07-26 18:33:48.793489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.793515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.793698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.793723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.793880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.793905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.794043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.794074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.794277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.794305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.794467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.794492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.794629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.794654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.794818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.794844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.795014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.013 [2024-07-26 18:33:48.795042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.013 qpair failed and we were unable to recover it. 00:33:23.013 [2024-07-26 18:33:48.795223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.795248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 
00:33:23.014 [2024-07-26 18:33:48.795376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.795401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.795538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.795579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.795744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.795772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.795926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.795958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.796148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.796174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.796309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.796334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.796553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.796578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.796721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.796746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.796955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.796983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.797139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.797166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 
00:33:23.014 [2024-07-26 18:33:48.797318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.797343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.797525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.797553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.797741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.797766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.797931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.797957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.798122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.798150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.798328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.798356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.798541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.798566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.798706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.798749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.798895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.798924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.799111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.799139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 
00:33:23.014 [2024-07-26 18:33:48.799323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.799348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.799559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.799587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.014 [2024-07-26 18:33:48.799759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.014 [2024-07-26 18:33:48.799787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.014 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.799963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.799990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.800161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.800187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.800368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.800396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.800573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.800601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.800808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.800836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.800993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.801018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.801185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.801211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 
00:33:23.015 [2024-07-26 18:33:48.801374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.801415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.801606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.801634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.801843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.801868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.802050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.802084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.802249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.802275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.802438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.802478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.802638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.802662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.802854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.802880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.803041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.803075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.803263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.803291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 
00:33:23.015 [2024-07-26 18:33:48.803438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.803463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.803685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.803713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.803891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.803919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.804100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.804129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.804317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.804346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.804529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.015 [2024-07-26 18:33:48.804557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.015 qpair failed and we were unable to recover it. 00:33:23.015 [2024-07-26 18:33:48.804704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.804732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.804918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.804943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.805133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.805159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.805372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.805401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 
00:33:23.016 [2024-07-26 18:33:48.805566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.805591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.805749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.805791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.805966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.805991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.806164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.806193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.806343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.806370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.806553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.806581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.806763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.806789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.806964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.806992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.807185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.807211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.807356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.807381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 
00:33:23.016 [2024-07-26 18:33:48.807519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.807545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.807725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.807753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.807900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.807928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.808105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.808134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.808320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.808345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.808483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.808509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.808673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.808714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.808895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.808923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.809108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.809134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 00:33:23.016 [2024-07-26 18:33:48.809301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.016 [2024-07-26 18:33:48.809326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.016 qpair failed and we were unable to recover it. 
00:33:23.017 [2024-07-26 18:33:48.809492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.809518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.809671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.809700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.809857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.809882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.810045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.810075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.810267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.810295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.810464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.810491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.810647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.810672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.810852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.810880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.811053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.811087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.811237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.811265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 
00:33:23.017 [2024-07-26 18:33:48.811430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.811455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.811622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.811647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.811833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.811859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.812094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.812123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.812286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.812312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.812521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.812549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.812766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.812791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.812957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.812982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.813135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.813161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.813303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.813329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 
00:33:23.017 [2024-07-26 18:33:48.813539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.813567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.813744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.813772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.813974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.017 [2024-07-26 18:33:48.814000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.017 qpair failed and we were unable to recover it. 00:33:23.017 [2024-07-26 18:33:48.814257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.814285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.814450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.814476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.814636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.814676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.814859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.814884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.815067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.815095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.815288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.815313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.815464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.815490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 
00:33:23.018 [2024-07-26 18:33:48.815648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.815674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.815836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.815877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.816089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.816118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.816369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.816397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.816585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.816609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.816790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.816819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.816973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.817001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.817177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.817205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.817387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.817412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.817600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.817629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 
00:33:23.018 [2024-07-26 18:33:48.817805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.817833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.817979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.818006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.818211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.818241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.818427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.818455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.818622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.818650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.818837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.818863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.818995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.018 [2024-07-26 18:33:48.819020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.018 qpair failed and we were unable to recover it. 00:33:23.018 [2024-07-26 18:33:48.819228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.019 [2024-07-26 18:33:48.819254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.019 qpair failed and we were unable to recover it. 00:33:23.019 [2024-07-26 18:33:48.819417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.019 [2024-07-26 18:33:48.819459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.019 qpair failed and we were unable to recover it. 00:33:23.019 [2024-07-26 18:33:48.819665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.019 [2024-07-26 18:33:48.819692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.019 qpair failed and we were unable to recover it. 
00:33:23.019 (the three messages above repeat, with new timestamps only, for roughly 200 further connection attempts from 18:33:48.819852 through 18:33:48.860776; every attempt fails with errno = 111 against tqpair=0x95a4b0, addr=10.0.0.2, port=4420, and no qpair can be recovered)
00:33:23.027 [2024-07-26 18:33:48.860930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.027 [2024-07-26 18:33:48.860955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.027 qpair failed and we were unable to recover it. 00:33:23.027 [2024-07-26 18:33:48.861089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.027 [2024-07-26 18:33:48.861131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.027 qpair failed and we were unable to recover it. 00:33:23.027 [2024-07-26 18:33:48.861302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.027 [2024-07-26 18:33:48.861330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.027 qpair failed and we were unable to recover it. 00:33:23.027 [2024-07-26 18:33:48.861509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.027 [2024-07-26 18:33:48.861534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.027 qpair failed and we were unable to recover it. 00:33:23.027 [2024-07-26 18:33:48.861698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.027 [2024-07-26 18:33:48.861723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.027 qpair failed and we were unable to recover it. 00:33:23.027 [2024-07-26 18:33:48.861889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.861915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.862068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.862096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.862260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.862284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.862421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.862446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.862634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.862659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 
00:33:23.028 [2024-07-26 18:33:48.862854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.862882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.863091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.863120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.863306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.863331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.863515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.863541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.863669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.863693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.863868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.863893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.864057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.864087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.864226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.864252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.864386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.864411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.864554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.864595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 
00:33:23.028 [2024-07-26 18:33:48.864803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.864829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.864982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.865010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.028 qpair failed and we were unable to recover it. 00:33:23.028 [2024-07-26 18:33:48.865199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.028 [2024-07-26 18:33:48.865225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.865385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.865413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.865627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.865653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.865792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.865817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.866021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.866048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.866204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.866233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.866391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.866418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.866626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.866655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 
00:33:23.029 [2024-07-26 18:33:48.866826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.866854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.867055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.867089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.867273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.867298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.867486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.867514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.867728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.867754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.867878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.867903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.868069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.868095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.868235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.868261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.868390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.868415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.868577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.868602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 
00:33:23.029 [2024-07-26 18:33:48.868745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.868771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.868934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.868976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.869152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.869180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.869357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.869385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.869594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.869619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.869819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.029 [2024-07-26 18:33:48.869845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.029 qpair failed and we were unable to recover it. 00:33:23.029 [2024-07-26 18:33:48.870046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.870080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.870223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.870250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.870413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.870439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.870657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.870686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 
00:33:23.030 [2024-07-26 18:33:48.870825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.870853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.871042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.871076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.871228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.871253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.871465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.871493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.871696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.871724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.871904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.871932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.872142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.872167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.872351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.872380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.872588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.872613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.872750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.872775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 
00:33:23.030 [2024-07-26 18:33:48.872933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.872961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.873144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.873170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.873327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.873369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.873514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.873542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.873720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.873750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.873962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.873991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.874153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.874183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.874362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.874390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.874549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.874574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 00:33:23.030 [2024-07-26 18:33:48.874737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.030 [2024-07-26 18:33:48.874780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.030 qpair failed and we were unable to recover it. 
00:33:23.031 [2024-07-26 18:33:48.874949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.874976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.875143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.875171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.875379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.875404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.875593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.875645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.875822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.875850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.876054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.876089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.876243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.876268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.876455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.876483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.876624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.876652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.876845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.876870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 
00:33:23.031 [2024-07-26 18:33:48.877083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.877109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.877274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.877303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.877514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.877539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.877704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.877729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.877857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.877882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.878067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.878095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.878276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.878305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.878469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.878497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.878677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.878702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.878852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.878880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 
00:33:23.031 [2024-07-26 18:33:48.879091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.879120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.879324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.879352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.879537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.879562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.031 [2024-07-26 18:33:48.879729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.031 [2024-07-26 18:33:48.879755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.031 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.879885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.879910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.880092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.880121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.880295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.880320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.880488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.880516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.880694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.880722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.880874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.880902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 
00:33:23.032 [2024-07-26 18:33:48.881079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.881104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.881286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.881314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.881515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.881543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.881723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.881750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.881904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.881929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.882065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.882114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.882329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.882355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.882536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.882563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.882772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.882797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.883015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.883043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 
00:33:23.032 [2024-07-26 18:33:48.883231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.883259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.883413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.883441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.883617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.883642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.883821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.883849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.884053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.884088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.884233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.884258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.884422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.884447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.032 [2024-07-26 18:33:48.884632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-26 18:33:48.884660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.032 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.884803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.884831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.885015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.885043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 
00:33:23.033 [2024-07-26 18:33:48.885215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.885240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.885417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.885445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.885619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.885647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.885829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.885857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.886040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.886072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.886261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.886290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.886456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.886481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.886635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.886660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.886844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.886869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 00:33:23.033 [2024-07-26 18:33:48.887084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-26 18:33:48.887113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.033 qpair failed and we were unable to recover it. 
00:33:23.033 [2024-07-26 18:33:48.887302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.033 [2024-07-26 18:33:48.887328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:23.033 qpair failed and we were unable to recover it.
00:33:23.033 [2024-07-26 18:33:48.887491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.033 [2024-07-26 18:33:48.887516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:23.033 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023: connect() failed, errno = 111; nvme_tcp.c:2383: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 18:33:48.887669 through 18:33:48.929960; only the timestamps differ ...]
00:33:23.042 [2024-07-26 18:33:48.929960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.042 [2024-07-26 18:33:48.929988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:23.042 qpair failed and we were unable to recover it.
00:33:23.042 [2024-07-26 18:33:48.930175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.042 [2024-07-26 18:33:48.930201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.042 qpair failed and we were unable to recover it. 00:33:23.042 [2024-07-26 18:33:48.930410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.042 [2024-07-26 18:33:48.930438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.042 qpair failed and we were unable to recover it. 00:33:23.042 [2024-07-26 18:33:48.930605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.042 [2024-07-26 18:33:48.930633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.042 qpair failed and we were unable to recover it. 00:33:23.042 [2024-07-26 18:33:48.930795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.042 [2024-07-26 18:33:48.930860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.042 qpair failed and we were unable to recover it. 00:33:23.042 [2024-07-26 18:33:48.931024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.042 [2024-07-26 18:33:48.931049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.042 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.931220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.931245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.931430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.931458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.931635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.931663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.931813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.931837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.932010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.932039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 
00:33:23.043 [2024-07-26 18:33:48.932204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.932230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.932398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.932438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.932647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.932672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.932830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.932859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.933005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.933033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.933227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.933253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.933430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.933455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.933613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.933641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.933847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.933875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.934049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.934085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 
00:33:23.043 [2024-07-26 18:33:48.934213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.934238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.934376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.934419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.934601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.934625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.934811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.934836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.935022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.935047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.935189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.935215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.935346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.935371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.935579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.935607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.935766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.935791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 00:33:23.043 [2024-07-26 18:33:48.935959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.043 [2024-07-26 18:33:48.935984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.043 qpair failed and we were unable to recover it. 
00:33:23.043 [2024-07-26 18:33:48.936154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.936180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.936362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.936389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.936550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.936575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.936707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.936732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.936922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.936951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.937104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.937133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.937312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.937337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.937496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.937525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.937682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.937708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.937837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.937862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 
00:33:23.044 [2024-07-26 18:33:48.938034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.938064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.938261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.938286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.938463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.938491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.938640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.938668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.938850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.938874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.939112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.939138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.939278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.939303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.939438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.939463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.939634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.939659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.939798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.939840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 
00:33:23.044 [2024-07-26 18:33:48.940017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.940045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.940236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.940265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.940416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.940441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.940624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.940651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.940829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.940857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.044 [2024-07-26 18:33:48.941012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.044 [2024-07-26 18:33:48.941041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.044 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.941234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.941260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.941449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.941478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.941654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.941682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.941859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.941887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 
00:33:23.045 [2024-07-26 18:33:48.942076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.942101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.942236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.942277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.942469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.942497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.942641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.942668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.942828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.942853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.943020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.943046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.943229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.943258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.943419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.943447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.943624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.943649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.943834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.943862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 
00:33:23.045 [2024-07-26 18:33:48.944047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.944077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.944220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.944244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.944379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.944404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.944544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.944586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.944755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.944783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.944961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.944989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.945158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.945183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.945355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.945399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.945569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.945601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.945755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.945782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 
00:33:23.045 [2024-07-26 18:33:48.946024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.946052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.045 qpair failed and we were unable to recover it. 00:33:23.045 [2024-07-26 18:33:48.946240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.045 [2024-07-26 18:33:48.946266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.946409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.946435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.946638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.946666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.946850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.946875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.947025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.947052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.947209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.947237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.947443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.947470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.947650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.947675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.947831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.947859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 
00:33:23.046 [2024-07-26 18:33:48.948035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.948069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.948234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.948262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.948420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.948445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.948587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.948612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.948776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.948801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.948950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.948978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.949160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.949186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.949323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.949349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.949510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.949535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.949737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.949762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 
00:33:23.046 [2024-07-26 18:33:48.949924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.949949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.950086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.950112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.950277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.950303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.950515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.950540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.950682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.950707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.950889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.950922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.046 qpair failed and we were unable to recover it. 00:33:23.046 [2024-07-26 18:33:48.951128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.046 [2024-07-26 18:33:48.951158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.951311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.951339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.951533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.951558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.951719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.951747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 
00:33:23.047 [2024-07-26 18:33:48.951922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.951950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.952103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.952131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.952305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.952330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.952505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.952533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.952711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.952739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.952920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.952949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.953135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.953161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.953327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.953352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.953509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.953534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.953731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.953759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 
00:33:23.047 [2024-07-26 18:33:48.953930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.953958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.954158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.954183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.954322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.954362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.954537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.954565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.954722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.954747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.954882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.954923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.955100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.955129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.955300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.955325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.955496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.955520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 00:33:23.047 [2024-07-26 18:33:48.955676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.047 [2024-07-26 18:33:48.955704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.047 qpair failed and we were unable to recover it. 
00:33:23.047 [2024-07-26 18:33:48.955877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.047 [2024-07-26 18:33:48.955905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:23.047 qpair failed and we were unable to recover it.
00:33:23.047 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats back-to-back, roughly 200 more times, from [2024-07-26 18:33:48.955877] through [2024-07-26 18:33:48.995321] ...] 00:33:23.056
00:33:23.056 [2024-07-26 18:33:48.995486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.995514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.995742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.995770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.995942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.995969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.996172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.996200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.996389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.996417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.996622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.996650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.996796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.996821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.997006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.997031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.997177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.997202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.997340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.997365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 
00:33:23.056 [2024-07-26 18:33:48.997527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.997556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.997718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.997744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.997941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.997967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.998119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.998145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.998279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.998304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.998430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.998455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.998616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.998641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.998783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.056 [2024-07-26 18:33:48.998808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.056 qpair failed and we were unable to recover it. 00:33:23.056 [2024-07-26 18:33:48.998965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:48.998990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:48.999191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:48.999217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 
00:33:23.057 [2024-07-26 18:33:48.999385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:48.999411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:48.999541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:48.999566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:48.999729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:48.999753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:48.999894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:48.999919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.000071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.000097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.000231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.000257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.000397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.000423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.000582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.000607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.000746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.000772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.000904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.000929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 
00:33:23.057 [2024-07-26 18:33:49.001086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.001112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.001241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.001267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.001437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.001463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.001599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.001624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.001791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.001816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.001976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.002001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.002176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.002202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.002370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.002396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.002543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.002568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.002748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.002772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 
00:33:23.057 [2024-07-26 18:33:49.002919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.002947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.003128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.003162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.003302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.003327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.003462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.003488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.003631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.003656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.003814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.003839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.004003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.004029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.004179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.004205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.004335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.004360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.057 qpair failed and we were unable to recover it. 00:33:23.057 [2024-07-26 18:33:49.004489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.057 [2024-07-26 18:33:49.004514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 
00:33:23.058 [2024-07-26 18:33:49.004677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.004702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.004864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.004893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.005066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.005092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.005253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.005279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.005440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.005467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.005618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.005643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.005784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.005810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.005947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.005988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.006162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.006188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.006351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.006376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 
00:33:23.058 [2024-07-26 18:33:49.006536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.006562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.006691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.006717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.006845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.006870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.007013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.007037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.007206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.007231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.007394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.007420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.007606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.007634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.007807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.007835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.008010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.008039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.008192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.008233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 
00:33:23.058 [2024-07-26 18:33:49.008402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.008432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.008640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.008667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.008882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.008910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.009090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.009116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.009245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.009270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.009411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.009438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.009574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.009600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.009737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.009763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.009926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.009955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.010122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.010149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 
00:33:23.058 [2024-07-26 18:33:49.010290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.010316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.010450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.010475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.010631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.058 [2024-07-26 18:33:49.010656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.058 qpair failed and we were unable to recover it. 00:33:23.058 [2024-07-26 18:33:49.010788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.010814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.010955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.010980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.011116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.011141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.011283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.011309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.011494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.011519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.011657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.011682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.011869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.011894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 
00:33:23.059 [2024-07-26 18:33:49.012066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.012092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.012250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.012275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.012421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.012446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.012607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.012632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.012766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.012791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.012950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.012976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.013166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.013192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.013334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.013359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.013518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.013543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.013714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.013739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 
00:33:23.059 [2024-07-26 18:33:49.013905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.013930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.014090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.014116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.014242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.014267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.014436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.014461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.014638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.014664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.014799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.014823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.015018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.015044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.015180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.015205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.015396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.015422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.015582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.015607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 
00:33:23.059 [2024-07-26 18:33:49.015796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.015821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.015984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.016011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.016206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.016233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.016377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.016403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.016564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.016589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.016755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.016781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.016940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.059 [2024-07-26 18:33:49.016968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.059 qpair failed and we were unable to recover it. 00:33:23.059 [2024-07-26 18:33:49.017115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.017141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.017305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.017330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.017516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.017545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 
00:33:23.060 [2024-07-26 18:33:49.017703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.017729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.017890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.017915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.018081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.018107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.018274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.018299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.018485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.018511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.018666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.018691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.018831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.018856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.019044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.019076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.019244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.019269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 00:33:23.060 [2024-07-26 18:33:49.019438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.019463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it. 
00:33:23.060 [2024-07-26 18:33:49.019631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.060 [2024-07-26 18:33:49.019656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.060 qpair failed and we were unable to recover it.
00:33:23.066 [2024-07-26 18:33:49.061497] (the three-line connect()/qpair error above repeats verbatim for every reconnect attempt from 18:33:49.019631 through 18:33:49.061497; only the timestamps differ)
00:33:23.066 [2024-07-26 18:33:49.061705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.061731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.061912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.061940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.062082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.062112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.062286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.062314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.062470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.062495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.062684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.062709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.062889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.062914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.063127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.063156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.063345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.063370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.063546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.063572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 
00:33:23.066 [2024-07-26 18:33:49.063729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.063754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.063973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.063999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.064168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.064194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.064412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.064441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.064618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.066 [2024-07-26 18:33:49.064646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.066 qpair failed and we were unable to recover it. 00:33:23.066 [2024-07-26 18:33:49.064832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.064860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.065079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.065106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.065266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.065292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.065471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.065499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.065657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.065686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 
00:33:23.067 [2024-07-26 18:33:49.065863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.065888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.066056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.066090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.066269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.066297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.066483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.066509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.066701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.066726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.066881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.066910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.067112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.067138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.067272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.067297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.067423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.067449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.067619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.067662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 
00:33:23.067 [2024-07-26 18:33:49.067842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.067870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.068045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.068080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.068238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.068263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.068435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.068461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.068677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.068704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.068886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.068914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.069077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.069110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.069267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.069295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.069501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.069526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.069709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.069737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 
00:33:23.067 [2024-07-26 18:33:49.069937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.069962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.070104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.070130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.070293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.070333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.070515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.070542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.070702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.070727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.070935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.070963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.071130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.071156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.071314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.071339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.071494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.071519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.071732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.071760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 
00:33:23.067 [2024-07-26 18:33:49.071937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.071965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.072161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.072194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.067 [2024-07-26 18:33:49.072362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.067 [2024-07-26 18:33:49.072387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.067 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.072598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.072626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.072838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.072863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.073041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.073076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.073244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.073268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.073412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.073437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.073594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.073619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.073771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.073796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 
00:33:23.068 [2024-07-26 18:33:49.073989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.074014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.074171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.074197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.074361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.074402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.074572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.074598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.074738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.074763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.074935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.074961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.075182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.075211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.075401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.075429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.075596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.075621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.075829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.075857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 
00:33:23.068 [2024-07-26 18:33:49.076010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.076038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.076207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.076235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.076459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.076484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.076644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.076672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.076849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.076877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.077039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.077086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.077246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.077271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.077449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.077477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.077659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.077687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.077895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.077923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 
00:33:23.068 [2024-07-26 18:33:49.078078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.078104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.078287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.078315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.078464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.078494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.078670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.078698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.078908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.078933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.079141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.079170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.079313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.079341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.079503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.079527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.079714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.079739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.068 [2024-07-26 18:33:49.079920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.079948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 
00:33:23.068 [2024-07-26 18:33:49.080128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.068 [2024-07-26 18:33:49.080154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.068 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.080289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.080314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.080481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.080510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.080666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.080694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.080873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.080902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.081045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.081081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.081261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.081286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.081503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.081532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.081732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.081760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.081937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.081965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 
00:33:23.069 [2024-07-26 18:33:49.082133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.082159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.082321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.082366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.082541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.082569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.082712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.082740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.082947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.082972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.083158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.083187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.083346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.083374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.083555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.083583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.083762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.083788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.084000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.084028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 
00:33:23.069 [2024-07-26 18:33:49.084221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.084247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.084387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.084413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.084574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.084599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.084782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.084810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.084956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.084983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.085157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.085186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.085369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.085395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.085539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.085565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.085725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.085751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.085939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.085971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 
00:33:23.069 [2024-07-26 18:33:49.086149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.086176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.086324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.086352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.086527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.086555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.086724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.086752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.086937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.086963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.087179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.087208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.087396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.087424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.087611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.087636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.087791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.087817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 00:33:23.069 [2024-07-26 18:33:49.088025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.069 [2024-07-26 18:33:49.088054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.069 qpair failed and we were unable to recover it. 
00:33:23.069 [2024-07-26 18:33:49.088234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.070 [2024-07-26 18:33:49.088263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420
00:33:23.070 qpair failed and we were unable to recover it.
[log trimmed: the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats verbatim for every reconnect attempt from 18:33:49.088450 through 18:33:49.131363 (console timestamps 00:33:23.069–00:33:23.360), with only the timestamps changing. Every attempt fails identically.]
00:33:23.360 [2024-07-26 18:33:49.131521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.360 [2024-07-26 18:33:49.131546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.360 qpair failed and we were unable to recover it. 00:33:23.360 [2024-07-26 18:33:49.131756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.360 [2024-07-26 18:33:49.131784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.360 qpair failed and we were unable to recover it. 00:33:23.360 [2024-07-26 18:33:49.131987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.132015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.132176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.132205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.132392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.132418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.132623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.132653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.132826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.132854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.133056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.133093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.133271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.133296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.133452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.133480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 
00:33:23.361 [2024-07-26 18:33:49.133682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.133710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.133869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.133897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.134087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.134114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.134277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.134303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.134479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.134507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.134659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.134687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.134841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.134866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.135050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.135086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.135279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.135304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.135453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.135479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 
00:33:23.361 [2024-07-26 18:33:49.135612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.135637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.135837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.135865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.136018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.136046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.136236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.136264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.136429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.136459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.136654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.136679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.136818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.136843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.136998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.137040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.137239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.361 [2024-07-26 18:33:49.137264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.361 qpair failed and we were unable to recover it. 00:33:23.361 [2024-07-26 18:33:49.137447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.137476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 
00:33:23.362 [2024-07-26 18:33:49.137633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.137661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.137814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.137842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.138024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.138049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.138225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.138268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.138417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.138445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.138631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.138656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.138815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.138840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.139006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.139032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.139203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.139229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.139394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.139422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 
00:33:23.362 [2024-07-26 18:33:49.139577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.139602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.139734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.139776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.139922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.139950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.140109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.140135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.140270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.140297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.140480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.140509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.140656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.140684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.140843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.140871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.141129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.141155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.141318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.141360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 
00:33:23.362 [2024-07-26 18:33:49.141542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.141570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.141722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.141749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.141906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.141931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.142119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.142147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.142339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.142364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.142499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.142524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.142661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.142686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.142825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.142851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.143033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.143070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.362 [2024-07-26 18:33:49.143237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.143262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 
00:33:23.362 [2024-07-26 18:33:49.143392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.362 [2024-07-26 18:33:49.143418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.362 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.143581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.143606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.143745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.143770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.143927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.143968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.144144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.144170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.144319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.144351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.144492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.144521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.144673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.144701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.144864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.144889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.145055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.145109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 
00:33:23.363 [2024-07-26 18:33:49.145325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.145357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.145545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.145570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.145764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.145789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.145970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.145998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.146188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.146213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.146341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.146382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.146562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.146587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.146764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.146792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.146968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.146997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.147213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.147241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 
00:33:23.363 [2024-07-26 18:33:49.147407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.147433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.147598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.147624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.147780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.147805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.147937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.147978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.148195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.148221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.148417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.148446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.148591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.148620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.148797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.148822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.148962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.148987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.363 [2024-07-26 18:33:49.149152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.149178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 
00:33:23.363 [2024-07-26 18:33:49.149376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.363 [2024-07-26 18:33:49.149404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.363 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.149610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.149638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.149820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.149849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.150029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.150057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.150254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.150282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.150461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.150489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.150676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.150701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.150863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.150888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.151075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.151108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.151289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.151314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 
00:33:23.364 [2024-07-26 18:33:49.151456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.151482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.151672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.151697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.151822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.151847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.152051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.152088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.152302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.152327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.152515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.152544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.152750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.152779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.152950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.152975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.153132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.153158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.153367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.153395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 
00:33:23.364 [2024-07-26 18:33:49.153582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.153610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.153814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.153839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.154023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.154050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.154217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.154242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.154437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.154466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.154637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.154663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.154798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.154824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.154980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.155005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.364 [2024-07-26 18:33:49.155220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.364 [2024-07-26 18:33:49.155249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.364 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.155464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.155489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 
00:33:23.365 [2024-07-26 18:33:49.155656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.155682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.155866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.155916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.156099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.156128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.156301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.156328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.156537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.156562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.156724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.156749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.156934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.156961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.157109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.157138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.157325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.157350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.157563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.157591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 
00:33:23.365 [2024-07-26 18:33:49.157776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.157804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.157982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.158009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.158177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.158202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.158367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.158415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.158620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.158648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.158790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.158818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.158998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.159022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.159188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.159214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.159375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.159400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.159566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.159591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 
00:33:23.365 [2024-07-26 18:33:49.159757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.159784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.159969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.159997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.160206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.160235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.160383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.160411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.160597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.160622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.160832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.160860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.161034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.161069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.161225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.161253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.161408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.161433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.161588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.161613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 
00:33:23.365 [2024-07-26 18:33:49.161827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.161856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.162076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.162105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.365 qpair failed and we were unable to recover it. 00:33:23.365 [2024-07-26 18:33:49.162266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.365 [2024-07-26 18:33:49.162291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.162472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.162501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.162678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.162707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.162879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.162907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.163142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.163168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.163381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.163410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.163586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.163614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.163796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.163823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-07-26 18:33:49.164001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.164030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.164256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.164285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.164460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.164488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.164662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.164689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.164839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.164864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.165040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.165075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.165288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.165316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.165504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.165529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.165713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.165738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.165953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.165981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-07-26 18:33:49.166160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.166189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.166361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.366 [2024-07-26 18:33:49.166389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-07-26 18:33:49.166598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.166623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.166773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.166801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.166960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.166988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.167169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.167199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.167390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.167415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.167635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.167663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.167874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.167903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.168087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.168116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 
00:33:23.367 [2024-07-26 18:33:49.168298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.168323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.168535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.168563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.168737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.168765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.168967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.168994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.169148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.169178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.169365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.169394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.169538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.169566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.169777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.169802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.169948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.169973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.170199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.170228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 
00:33:23.367 [2024-07-26 18:33:49.170408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.170436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.170577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.170605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.170762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.170788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.171000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.171028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.171234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.171259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.171403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.171430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.171654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.171679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.171858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.171887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.172086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.172115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.172295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.172320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 
00:33:23.367 [2024-07-26 18:33:49.172455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.172480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.172689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.172722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.172885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.172910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.173070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.173122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.173295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.367 [2024-07-26 18:33:49.173320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-07-26 18:33:49.173532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.173560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.173735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.173763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.173962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.173989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.174175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.174201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.174341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.174367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 
00:33:23.368 [2024-07-26 18:33:49.174572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.174601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.174752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.174780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.174957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.174983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1620083 Killed "${NVMF_APP[@]}" "$@" 00:33:23.368 [2024-07-26 18:33:49.175199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.175229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.175438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.175468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.175630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.175655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.175841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.175867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.176002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.176027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.176176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:23.368 [2024-07-26 18:33:49.176202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 
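The errno = 111 in the posix_sock_create failures above is ECONNREFUSED: each connect() is rejected because nothing is listening on 10.0.0.2:4420 once the target application has been killed (the "line 36: 1620083 Killed "${NVMF_APP[@]}"" message from target_disconnect.sh). A minimal sketch of how one could confirm this from the test host, assuming a bash with /dev/tcp support and that 10.0.0.2:4420 is the NVMe/TCP listener under test:

    # Probe the NVMe/TCP listen port; while the target is down this takes the
    # refused/timed-out branch, matching the errno = 111 qpair errors above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connect to 10.0.0.2:4420 refused or timed out (cf. errno 111)"
    fi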
00:33:23.368 [2024-07-26 18:33:49.176414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:23.368 [2024-07-26 18:33:49.176442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.176596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.176621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:23.368 [2024-07-26 18:33:49.176755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.176798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.176973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.177001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:23.368 [2024-07-26 18:33:49.177191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.177217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.177361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.177386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.177550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.177576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.177797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.177825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.177976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.178004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 
00:33:23.368 [2024-07-26 18:33:49.178192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.178218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.178362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.178388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.178544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.178585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.178756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.178784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.178967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.178992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.368 [2024-07-26 18:33:49.179150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.368 [2024-07-26 18:33:49.179191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.368 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.179406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.179432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.179585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.179610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.179797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.179822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.180010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.180039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 
00:33:23.369 [2024-07-26 18:33:49.180219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.180247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.180438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.180468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.180599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.180624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.180766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.180791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.180957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.180998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.181184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.181213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.181371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.181396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.181679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.181736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.181939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.181967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 
00:33:23.369 [2024-07-26 18:33:49.182122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1620631 00:33:23.369 [2024-07-26 18:33:49.182151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1620631 00:33:23.369 [2024-07-26 18:33:49.182338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.182364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.182521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1620631 ']' 00:33:23.369 [2024-07-26 18:33:49.182549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.369 [2024-07-26 18:33:49.182728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.182761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:23.369 [2024-07-26 18:33:49.182940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.182970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.369 [2024-07-26 18:33:49.183117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.183154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 
00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:23.369 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:23.369 [2024-07-26 18:33:49.183339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.183365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.183509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.183553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.183758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.183783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.369 qpair failed and we were unable to recover it. 00:33:23.369 [2024-07-26 18:33:49.183943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.369 [2024-07-26 18:33:49.183972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.184129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.184155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.184314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.184361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.184533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.184559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.184719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.184763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.184943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.184970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 
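The waitforlisten 1620631 call above blocks until the freshly restarted nvmf_tgt answers RPCs on /var/tmp/spdk.sock. A simplified sketch of such a helper, loosely modeled on SPDK's waitforlisten and assuming scripts/rpc.py from the checkout is on hand (not the harness's exact implementation):

    # Poll until the target process (by pid) is alive and its RPC socket responds.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1        # target process died
            if ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                                  # RPC server is up
            fi
            sleep 0.1
        done
        return 1                                          # timed out waiting
    }

Used here it would be wait_for_rpc "$nvmfpid", after which the test can resume issuing connect attempts against the restarted target.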
00:33:23.370 [2024-07-26 18:33:49.185114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.185141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.185290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.185316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.185488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.185516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.185696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.185722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.185922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.185947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.186138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.186164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.186325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.186351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.186520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.186548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.186737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.186763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.186942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.186971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 
00:33:23.370 [2024-07-26 18:33:49.187135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.187161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.187298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.187323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.187517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.187542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.187724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.187757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.187898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.187926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.188108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.188137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.188294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.188320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.188506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.188535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.188680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.188709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.188855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.188883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 
00:33:23.370 [2024-07-26 18:33:49.189069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.189095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.189229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.189254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.189428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.189470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.189656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.189682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.189850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.189875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.190013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.190039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95a4b0 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.190223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.190263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.370 [2024-07-26 18:33:49.190450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.370 [2024-07-26 18:33:49.190478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.370 qpair failed and we were unable to recover it. 00:33:23.371 [2024-07-26 18:33:49.190720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.371 [2024-07-26 18:33:49.190748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.371 qpair failed and we were unable to recover it. 00:33:23.371 [2024-07-26 18:33:49.190937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.371 [2024-07-26 18:33:49.190966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.371 qpair failed and we were unable to recover it. 
00:33:23.371 [2024-07-26 18:33:49.191165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.371 [2024-07-26 18:33:49.191193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.371 qpair failed and we were unable to recover it.
00:33:23.376 [... the same three-line failure (connect() errno = 111, sock connection error on tqpair=0x7fcfa8000b90 to 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats for every attempt from 18:33:49.191353 through 18:33:49.230764 ...]
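Every entry above is the same event repeating: errno 111 on Linux is ECONNREFUSED, meaning nothing was listening at 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) when the initiator tried to open the qpair's socket, so each attempt fails immediately and the qpair is given up. A minimal standalone reproduction of the failing call, using only the standard POSIX socket API (illustrative only; this is not SPDK's posix_sock_create()):

/* Minimal sketch: reproduce the "connect() failed, errno = 111" line.
 * With no listener on the target address, connect() returns -1 with
 * errno == ECONNREFUSED (111 on Linux). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener this prints:
         *   connect() failed, errno = 111 (Connection refused)
         * An unreachable host would instead time out or report
         * EHOSTUNREACH rather than ECONNREFUSED. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}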
00:33:23.377 [2024-07-26 18:33:49.231399] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:33:23.377 [2024-07-26 18:33:49.231473] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
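The EAL parameters line shows how this nvmf target configured DPDK: -c 0xF0 is a coremask pinning it to cores 4-7, --base-virtaddr fixes where hugepage memory is mapped, and --file-prefix=spdk0 keeps its shared-memory files separate from other SPDK processes on the host. As a hedged sketch of how an SPDK application requests an equivalent setup (field names assume the public spdk/env.h API of this SPDK vintage, not code copied from the test):

/* Hedged sketch: request the EAL setup seen in the log via SPDK's env
 * options. SPDK translates these into the DPDK EAL arguments above. */
#include "spdk/env.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvmf";                     /* process name in EAL argv */
    opts.core_mask = "0xF0";                /* cores 4-7, as in -c 0xF0 */
    opts.shm_id = 0;                        /* observed here as --file-prefix=spdk0 */
    opts.base_virtaddr = 0x200000000000ULL; /* --base-virtaddr */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "Unable to initialize SPDK env\n");
        return 1;
    }
    return 0;
}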
00:33:23.377 [2024-07-26 18:33:49.232748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.377 [2024-07-26 18:33:49.232775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.377 qpair failed and we were unable to recover it.
00:33:23.382 EAL: No free 2048 kB hugepages reported on node 1
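The hugepage warning is unrelated to the connection errors: DPDK backs SPDK's memory with 2048 kB hugepages, and NUMA node 1 had none free when EAL scanned it (the run can still proceed if node 0 has enough). Availability can be checked from /proc/meminfo, or per node under /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages. A small C sketch of that check, using only standard Linux procfs paths:

/* Sketch: print system-wide 2048 kB hugepage counters before a run. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* HugePages_Total/Free/Rsvd/Surp and Hugepagesize */
        if (strncmp(line, "HugePages_", 10) == 0 ||
            strncmp(line, "Hugepagesize", 12) == 0) {
            fputs(line, stdout);
        }
    }
    fclose(f);
    return 0;
}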
00:33:23.382 [2024-07-26 18:33:49.271094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:23.387 [2024-07-26 18:33:49.301236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.387 [2024-07-26 18:33:49.301261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.387 [2024-07-26 18:33:49.301268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.387 qpair failed and we were unable to recover it. 00:33:23.387 [2024-07-26 18:33:49.301453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.387 [2024-07-26 18:33:49.301478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.387 qpair failed and we were unable to recover it. 00:33:23.387 [2024-07-26 18:33:49.301642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.387 [2024-07-26 18:33:49.301667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.387 qpair failed and we were unable to recover it. 00:33:23.387 [2024-07-26 18:33:49.301798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.387 [2024-07-26 18:33:49.301824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.387 qpair failed and we were unable to recover it. 00:33:23.387 [2024-07-26 18:33:49.301967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.387 [2024-07-26 18:33:49.301993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.387 qpair failed and we were unable to recover it. 00:33:23.387 [2024-07-26 18:33:49.302181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.387 [2024-07-26 18:33:49.302207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.387 qpair failed and we were unable to recover it. 00:33:23.387 [2024-07-26 18:33:49.302367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.302393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.302581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.302607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.302776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.302801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 
00:33:23.388 [2024-07-26 18:33:49.302964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.302990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.303123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.303149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.303338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.303363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.303489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.303514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.303655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.303681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.303838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.303864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.304052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.304083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.304245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.304270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.304412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.304438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.304600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.304626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 
00:33:23.388 [2024-07-26 18:33:49.304892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.304917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.305124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.305150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.305289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.305316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.305479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.305505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.305668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.305694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.305853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.305878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.306043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.306082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.306271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.306296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.306462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.306487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.306676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.306702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 
00:33:23.388 [2024-07-26 18:33:49.306867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.306893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.307052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.307087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.307255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.307281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.307443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.307468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.307630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.307654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.307847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.307872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.308005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.308030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.388 [2024-07-26 18:33:49.308190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.388 [2024-07-26 18:33:49.308215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.388 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.308401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.308425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.308562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.308587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 
00:33:23.389 [2024-07-26 18:33:49.308833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.308857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.309047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.309077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.309242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.309266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.309430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.309455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.309589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.309617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.309790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.309816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.309992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.310018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.310189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.310217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.310363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.310390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.310535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.310561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 
00:33:23.389 [2024-07-26 18:33:49.310749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.310774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.310942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.310980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.311147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.311174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.311339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.311366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.311508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.311533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.311704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.311730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.311869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.311895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.312069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.312096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.312248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.312277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.312447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.312472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 
00:33:23.389 [2024-07-26 18:33:49.312631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.312656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.312817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.312844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.313010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.313036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.313192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.313218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.313374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.313400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.313581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.313608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.313748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.313774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.313962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.313989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.314135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.314161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.314325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.314352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 
00:33:23.389 [2024-07-26 18:33:49.314519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.314546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.389 qpair failed and we were unable to recover it. 00:33:23.389 [2024-07-26 18:33:49.314683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.389 [2024-07-26 18:33:49.314718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.314916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.314942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.315077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.315124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.315295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.315320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.315493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.315519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.315681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.315707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.315850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.315876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.316042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.316073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.316220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.316247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 
00:33:23.390 [2024-07-26 18:33:49.316417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.316443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.316639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.316665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.316831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.316858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.317010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.317037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.317231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.317258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.317439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.317467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.317634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.317659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.317830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.317856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.318044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.318094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.318232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.318258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 
00:33:23.390 [2024-07-26 18:33:49.318422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.318447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.318640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.318667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.318847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.318874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.319008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.319035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.319219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.319246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.319415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.319441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.319586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.319611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.319805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.319833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.320002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.320028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.320197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.320224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 
00:33:23.390 [2024-07-26 18:33:49.320387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.320414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.320581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.320607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.320752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.320778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.320945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.320971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.390 [2024-07-26 18:33:49.321127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.390 [2024-07-26 18:33:49.321155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.390 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.321287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.321313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.321476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.321501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.321646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.321673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.321841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.321867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.322050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.322082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 
00:33:23.391 [2024-07-26 18:33:49.322249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.322284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.322485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.322515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.322661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.322687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.322855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.322882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.323048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.323080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.323221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.323247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.323411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.323437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.323604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.323630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.323797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.323824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.323990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.324016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 
00:33:23.391 [2024-07-26 18:33:49.324166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.324193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.324355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.324381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.324525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.324559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.324708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.324734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.324925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.324952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.325094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.325122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.325270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.325296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.325440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.325465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.325604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.325630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.325794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.325820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 
00:33:23.391 [2024-07-26 18:33:49.325958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.325985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.326133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.326159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.326350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.326377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.326537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.326573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.326710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.326735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.326922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.326948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.327092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.327119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.327287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.391 [2024-07-26 18:33:49.327313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.391 qpair failed and we were unable to recover it. 00:33:23.391 [2024-07-26 18:33:49.327478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.327504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.327688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.327714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 
00:33:23.392 [2024-07-26 18:33:49.327848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.327873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.328033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.328070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.328209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.328235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.328404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.328430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.328617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.328643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.328817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.328844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.329011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.329036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.329235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.329261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.329412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.329439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 00:33:23.392 [2024-07-26 18:33:49.329604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.392 [2024-07-26 18:33:49.329630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.392 qpair failed and we were unable to recover it. 
00:33:23.398 [2024-07-26 18:33:49.367435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.367461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.367651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.367677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.367843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.367869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.368030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.368056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.368227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.368253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.368419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.368445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.368578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.368604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.368736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.368762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.368926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.368952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.369117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.369143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 
00:33:23.398 [2024-07-26 18:33:49.369304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.369331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.398 [2024-07-26 18:33:49.369491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.398 [2024-07-26 18:33:49.369517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.398 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.369684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.369711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.369872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.369898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.370084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.370110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.370255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.370280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.370420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.370445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.370606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.370632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.370817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.370842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.371002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.371028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 
00:33:23.399 [2024-07-26 18:33:49.371197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.371223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.371386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.371412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.371597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.371623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.371753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.371779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.371940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.371965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.372127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.372154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.372340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.372367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.372531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.372556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.372714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.372745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.372881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.372908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 
00:33:23.399 [2024-07-26 18:33:49.373041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.373072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.373240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.373267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.373400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.373426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.373591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.373618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.373777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.373804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.373937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.373963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.374131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.374157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.374328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.374353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.374519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.374545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.374703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.374728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 
00:33:23.399 [2024-07-26 18:33:49.374871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.374898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.375038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.375071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.375268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.375294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.375429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.375454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.375596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.399 [2024-07-26 18:33:49.375622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.399 qpair failed and we were unable to recover it. 00:33:23.399 [2024-07-26 18:33:49.375763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.375789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.375945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.375971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.376156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.376182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.376371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.376397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.376564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.376590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 
00:33:23.400 [2024-07-26 18:33:49.376752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.376779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.376941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.376967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.377131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.377158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.377320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.377346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.377505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.377531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.377697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.377723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.377885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.377911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.378080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.378108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.378293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.378319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.378462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.378488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 
00:33:23.400 [2024-07-26 18:33:49.378654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.378680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.378842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.378867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.379065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.379091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.379253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.379279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.379467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.379493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.379640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.379666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.379827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.379852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.379982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.380009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.380198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.380228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.380390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.380417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 
00:33:23.400 [2024-07-26 18:33:49.380605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.380630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.380788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.380814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.380997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.381023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.381201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.381228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.381418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.381444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.381580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.381605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.381737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.381763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.381928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.381954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.400 [2024-07-26 18:33:49.382147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.400 [2024-07-26 18:33:49.382173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.400 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.382339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.382365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 
00:33:23.401 [2024-07-26 18:33:49.382507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.382533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.382723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.382749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.382890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.382917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.383103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.383130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.383316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.383342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.383480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.383507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.383641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.383668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.383858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.383884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.384022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.384048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.384211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.384237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 
00:33:23.401 [2024-07-26 18:33:49.384373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.384399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.384559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.384585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.384717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.384743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.384936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.384963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.385135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.385161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.385342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.385368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.385502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.385528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.385671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.385697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.385856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.385882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.386044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.386077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 
00:33:23.401 [2024-07-26 18:33:49.386265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.386290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.386450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.386475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.386609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.386636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.386800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.386825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.386980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.387006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.387173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.387200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.387336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.387362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.387524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.387551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.387717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.387747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.387908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.387933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 
00:33:23.401 [2024-07-26 18:33:49.388093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.388120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.388284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.401 [2024-07-26 18:33:49.388310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.401 qpair failed and we were unable to recover it. 00:33:23.401 [2024-07-26 18:33:49.388471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.388496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.388686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.388712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.388898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.388923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.389056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.389086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.389273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.389298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.389468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.389495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.389652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.389678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.389812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.389837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 
00:33:23.402 [2024-07-26 18:33:49.389968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.389995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.390160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.390187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.390355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.390381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.390510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.390535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.390702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.390727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.390867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.390894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.391031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.391056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.391197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.391223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.391354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.391380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 00:33:23.402 [2024-07-26 18:33:49.391546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.402 [2024-07-26 18:33:49.391572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.402 qpair failed and we were unable to recover it. 
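For reference, errno = 111 on Linux is ECONNREFUSED: the initiator's connect() reached 10.0.0.2, but nothing was accepting TCP connections on port 4420 at that instant, which is why every qpair attempt above fails identically. A minimal way to probe the listener by hand from the test host (a hedged sketch, assuming the OpenBSD netcat variant is installed; the address and port are taken verbatim from the log):

  # -z: probe only, send no data; -v: report whether the connect succeeded or was refused
  nc -zv 10.0.0.2 4420

A "Connection refused" result from this probe confirms the NVMe-oF/TCP target simply has no listener up on that address/port at the time of the check.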
00:33:23.402 [2024-07-26 18:33:49.391700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.391726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 qpair failed and we were unable to recover it.
00:33:23.402 [2024-07-26 18:33:49.391852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.391878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 qpair failed and we were unable to recover it.
00:33:23.402 [2024-07-26 18:33:49.392001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:23.402 [2024-07-26 18:33:49.392034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:23.402 [2024-07-26 18:33:49.392039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.392049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:23.402 [2024-07-26 18:33:49.392069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:23.402 [2024-07-26 18:33:49.392070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 [2024-07-26 18:33:49.392081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:23.402 qpair failed and we were unable to recover it.
00:33:23.402 [2024-07-26 18:33:49.392164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:23.402 [2024-07-26 18:33:49.392237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.392262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 qpair failed and we were unable to recover it.
00:33:23.402 [2024-07-26 18:33:49.392220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:23.402 [2024-07-26 18:33:49.392223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:23.402 [2024-07-26 18:33:49.392194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:23.402 [2024-07-26 18:33:49.392441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.392466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 qpair failed and we were unable to recover it.
00:33:23.402 [2024-07-26 18:33:49.392633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.392659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 qpair failed and we were unable to recover it.
00:33:23.402 [2024-07-26 18:33:49.392791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.402 [2024-07-26 18:33:49.392817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.402 qpair failed and we were unable to recover it.
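The app_setup_trace NOTICE lines just above are the application's own recipe for collecting its tracepoints. A minimal sketch of acting on it from a shell on the test host (the command and the shm file name are verbatim from the NOTICE text; assumptions: the spdk_trace tool from the same SPDK build is on PATH, and the copy destination is illustrative):

  # live snapshot of the running nvmf app's trace events (shared-memory instance 0)
  spdk_trace -s nvmf -i 0

  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/

Because this nvmf app is the only SPDK application running in the job, a bare 'spdk_trace' with no parameters would attach to it as well, exactly as the NOTICE text says.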
00:33:23.408 [2024-07-26 18:33:49.426329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.426361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.426496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.426522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.426660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.426685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.426811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.426836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.426994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.427019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.427219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.427246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.427381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.427408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.427561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.427586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.427715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.427740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.427895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.427920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 
00:33:23.408 [2024-07-26 18:33:49.428099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.428125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.428264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.428290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.428428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.428454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.428588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.428614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.428754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.428779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.428914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.428941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.429111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.429137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.429270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.429296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.429444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.429473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.429605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.429631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 
00:33:23.408 [2024-07-26 18:33:49.429756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.429782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.429943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.429968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.430099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.430125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.430295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.430321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.430487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.430513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.430674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.430699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.430836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.430861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.431023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.431049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.408 qpair failed and we were unable to recover it. 00:33:23.408 [2024-07-26 18:33:49.431192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.408 [2024-07-26 18:33:49.431218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.431347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.431372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 
00:33:23.409 [2024-07-26 18:33:49.431502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.431527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.431683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.431708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.431865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.431891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.432023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.432047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.432220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.432246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.432370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.432396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.432557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.432583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.432722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.432748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.432905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.432931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.433069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.433095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 
00:33:23.409 [2024-07-26 18:33:49.433263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.433288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.433438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.433463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.433619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.433644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.433773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.433798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.433954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.433980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.434125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.434159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.434314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.434341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.434499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.434526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.434688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.434718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.434898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.434924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 
00:33:23.409 [2024-07-26 18:33:49.435098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.435124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.435263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.435298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.435456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.435482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.435619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.435646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.435797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.435823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.435970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.435996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.436144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.436171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.436300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.436331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.436484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.436513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.436673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.436699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 
00:33:23.409 [2024-07-26 18:33:49.436832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.436857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.437021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.437056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.437212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.437238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.437490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.437516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.437689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.437724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.437878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.437904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.438045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.438077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.409 qpair failed and we were unable to recover it. 00:33:23.409 [2024-07-26 18:33:49.438218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.409 [2024-07-26 18:33:49.438245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.438411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.438436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.438578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.438613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 
00:33:23.410 [2024-07-26 18:33:49.438762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.438789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.438942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.438968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.439211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.439237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.439399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.439426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.439570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.439596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.439752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.439778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.439922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.439948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.440085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.440111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.440277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.440304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.440448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.440474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 
00:33:23.410 [2024-07-26 18:33:49.440622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.440647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.440790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.440817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.440970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.440996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.441142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.441169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.441320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.441347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.441501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.441527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.441670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.441697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.441841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.441867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.442006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.442032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.442177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.442203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 
00:33:23.410 [2024-07-26 18:33:49.442349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.442374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.442524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.442559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.442714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.442740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.442878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.442903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.443042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.443074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.443223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.443248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.443394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.443420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.443579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.443604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.443753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.443783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.443919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.443946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 
00:33:23.410 [2024-07-26 18:33:49.444121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.444147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.444281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.444316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.444500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.444526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.444663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.444689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.444814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.444840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.444974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.445000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.445155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.445181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.445335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.445361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.410 qpair failed and we were unable to recover it. 00:33:23.410 [2024-07-26 18:33:49.445528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.410 [2024-07-26 18:33:49.445554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.445715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.445740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 
00:33:23.411 [2024-07-26 18:33:49.445869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.445894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.446037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.446073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.446237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.446263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.446431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.446456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.446602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.446627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.446784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.446810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.446945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.446972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.447123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.447149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.447295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.447320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.447471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.447497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 
00:33:23.411 [2024-07-26 18:33:49.447664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.447689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.447851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.447877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.448013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.448039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.448226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.448253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.448388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.448413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.448575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.448601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.448755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.448780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.448925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.448951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.449126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.449153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.449286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.449311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 
00:33:23.411 [2024-07-26 18:33:49.449450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.449476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.449627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.449653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.449782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.449807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.449959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.449984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.450120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.450145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.450284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.450311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.450481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.450508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.450647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.450672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.450804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.450833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 00:33:23.411 [2024-07-26 18:33:49.450976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.451001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it. 
00:33:23.411 [2024-07-26 18:33:49.451135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.411 [2024-07-26 18:33:49.451161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.411 qpair failed and we were unable to recover it.
00:33:23.414 [identical connect()/qpair failure repeated for tqpair=0x7fcfa8000b90 through 2024-07-26 18:33:49.469507]
00:33:23.414 [2024-07-26 18:33:49.469672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-07-26 18:33:49.469716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it.
00:33:23.690 [identical connect()/qpair failure repeated for tqpair=0x7fcf98000b90 through 2024-07-26 18:33:49.487411]
00:33:23.690 [2024-07-26 18:33:49.487555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.487587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it.
00:33:23.690 [identical connect()/qpair failure repeated for tqpair=0x7fcfa8000b90 through 2024-07-26 18:33:49.489176]
00:33:23.690 [2024-07-26 18:33:49.489317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.489343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.489538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.489564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.489709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.489735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.489875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.489901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.490028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.490053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.490197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.490223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.490386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.490412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.490572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.490597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.490720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.490745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.490889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.490915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 
00:33:23.690 [2024-07-26 18:33:49.491077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.491103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.491248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.491273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.491413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.491438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.491610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.491636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.491802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.491827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.491964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.491994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.492137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.492163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.492331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.492357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.492487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.690 [2024-07-26 18:33:49.492513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.690 qpair failed and we were unable to recover it. 00:33:23.690 [2024-07-26 18:33:49.492685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.492710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 
00:33:23.691 [2024-07-26 18:33:49.492842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.492869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.493016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.493042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.493185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.493210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.493377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.493402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.493565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.493590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.493752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.493777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.493901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.493926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.494063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.494089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.494230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.494256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.494418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.494444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 
00:33:23.691 [2024-07-26 18:33:49.494604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.494629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.494767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.494793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.494919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.494944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.495072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.495098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.495259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.495284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.495412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.495437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.495602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.495628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.495770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.495796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.495921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.495946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.496079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.496106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 
00:33:23.691 [2024-07-26 18:33:49.496242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.496268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.496439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.496465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.496612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.496639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.496765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.496791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.496935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.496960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.497100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.497135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.497268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.497294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.497439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.497464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.497623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.497648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.497787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.497813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 
00:33:23.691 [2024-07-26 18:33:49.497979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.498004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.498167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.691 [2024-07-26 18:33:49.498193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.691 qpair failed and we were unable to recover it. 00:33:23.691 [2024-07-26 18:33:49.498341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.498366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.498496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.498522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.498689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.498714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.498876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.498906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.499049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.499079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.499228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.499253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.499411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.499436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.499571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.499597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 
00:33:23.692 [2024-07-26 18:33:49.499725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.499750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.499888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.499913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.500052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.500098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.500254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.500282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.500445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.500471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.500626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.500652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.500815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.500841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.500989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.501015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.501197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.501224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.501370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.501397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 
00:33:23.692 [2024-07-26 18:33:49.501535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.501561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.501720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.501746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.501884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.501909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.502042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.502077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.502240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.502266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.502430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.502457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.502590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.502616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.502809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.502834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.502970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.502996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.503152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.503178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 
00:33:23.692 [2024-07-26 18:33:49.503313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.503338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.503480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.503506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.503677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.503703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.503834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.503860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.692 [2024-07-26 18:33:49.504039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.692 [2024-07-26 18:33:49.504071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.692 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.504238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.504264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.504402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.504429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.504590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.504616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.504765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.504791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.504943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.504969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 
00:33:23.693 [2024-07-26 18:33:49.505130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.505157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.505292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.505319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.505464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.505490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.505628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.505654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.505814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.505841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.506007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.506037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.506180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.506206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.506343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.506369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.506525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.506551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.506707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.506733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 
00:33:23.693 [2024-07-26 18:33:49.506892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.506917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.507054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.507085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.507218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.507244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.507402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.507428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.507569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.507595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.507729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.507755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.507892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.507917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.508072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.508098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.693 [2024-07-26 18:33:49.508264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.693 [2024-07-26 18:33:49.508289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.693 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.508463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.508488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 
00:33:23.694 [2024-07-26 18:33:49.508646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.508671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.508819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.508845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.509028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.509054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.509233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.509259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.509400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.509425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.509603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.509628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.509766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.509793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.509926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.509952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.510122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.510148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.510327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.510354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 
00:33:23.694 [2024-07-26 18:33:49.510500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.510526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.510690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.510715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.510863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.510889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.511020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.511045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.511220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.511246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.511404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.511430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.511607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.511632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.511794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.511819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.511951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.511976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 00:33:23.694 [2024-07-26 18:33:49.512125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.694 [2024-07-26 18:33:49.512151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420 00:33:23.694 qpair failed and we were unable to recover it. 
00:33:23.694 [2024-07-26 18:33:49.512327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.694 [2024-07-26 18:33:49.512352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420
00:33:23.694 qpair failed and we were unable to recover it.
00:33:23.694 [2024-07-26 18:33:49.513066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.694 [2024-07-26 18:33:49.513095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.694 qpair failed and we were unable to recover it.
[... this three-line failure sequence repeats verbatim from 18:33:49.512 through 18:33:49.530, alternating between tqpair=0x7fcf98000b90 and tqpair=0x7fcfa8000b90, always against addr=10.0.0.2, port=4420; only the microsecond timestamps differ ...]
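For context on the flood above: errno = 111 is ECONNREFUSED on Linux, i.e. each connect() attempt reached the host at 10.0.0.2 but found no listener on port 4420 (the NVMe-oF TCP default port), so the kernel rejected the connection outright. Below is a minimal, self-contained sketch of the same failure mode; it assumes only that nothing is listening on the probed port, and the loopback address and the program itself are illustrative, not part of the test rig:

    /* connect() to a TCP port with no listener -> ECONNREFUSED (errno 111 on Linux) */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe-oF TCP default port */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* placeholder address */

        /* With no listener on the port, this fails immediately. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }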
[... the failure sequence continues while the test script's wait loop finishes; the interleaved repeats are omitted and only the script's xtrace lines are kept below ...]
00:33:23.697 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:23.698 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:33:23.698 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:23.698 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:23.698 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect()/qpair failure sequence then repeats uninterrupted through 18:33:49.549, still alternating between tqpair=0x7fcf98000b90 and tqpair=0x7fcfa8000b90 ...]
00:33:23.701 [2024-07-26 18:33:49.549197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.549223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.549349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.549375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.549509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.549536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.549681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.549706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.549837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.549863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.550048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.550082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.550209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.550234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.550396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.550421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.550560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.550586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.550776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.550802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 
00:33:23.701 [2024-07-26 18:33:49.550960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.550987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.551126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.551152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.551340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.551366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.551500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.551526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.551681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.551707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.551865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.551891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.552050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.552082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.552228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-07-26 18:33:49.552254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-07-26 18:33:49.552438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.552463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.552620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.552645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 
00:33:23.702 [2024-07-26 18:33:49.552783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.552809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.552981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.553007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.553170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.553196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.553335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.553362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.553551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.553578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.553712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.553739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.553910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.553936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.554082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.554108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.554244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.554270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-07-26 18:33:49.554416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-07-26 18:33:49.554442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 
00:33:23.702 [2024-07-26 18:33:49.555095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.702 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:23.702 [2024-07-26 18:33:49.555123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.702 qpair failed and we were unable to recover it.
00:33:23.702 [2024-07-26 18:33:49.555276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.702 [2024-07-26 18:33:49.555304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.702 qpair failed and we were unable to recover it.
00:33:23.702 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:23.702 [2024-07-26 18:33:49.555457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.702 [2024-07-26 18:33:49.555488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.702 qpair failed and we were unable to recover it.
00:33:23.702 [2024-07-26 18:33:49.555650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.702 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:23.702 [2024-07-26 18:33:49.555676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.702 qpair failed and we were unable to recover it.
00:33:23.702 [2024-07-26 18:33:49.555808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.702 [2024-07-26 18:33:49.555833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.702 qpair failed and we were unable to recover it.
00:33:23.702 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() failed / sock connection error / qpair failed triplet repeats from 18:33:49.555976 through 18:33:49.559744 (errno = 111, tqpair=0x7fcfa8000b90) ...]
00:33:23.703 [2024-07-26 18:33:49.559911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.703 [2024-07-26 18:33:49.559950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:23.703 qpair failed and we were unable to recover it.
[... the three messages above repeat for each failed attempt from 18:33:49.560112 through 18:33:49.567288 (errno = 111, tqpair=0x7fcfa0000b90) ...]
00:33:23.704 [2024-07-26 18:33:49.567452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.704 [2024-07-26 18:33:49.567492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.704 qpair failed and we were unable to recover it.
[... the three messages above repeat for each failed attempt from 18:33:49.567650 through 18:33:49.574323 (errno = 111, tqpair=0x7fcfa8000b90) ...]
00:33:23.705 [2024-07-26 18:33:49.574474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.705 [2024-07-26 18:33:49.574515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa0000b90 with addr=10.0.0.2, port=4420
00:33:23.705 qpair failed and we were unable to recover it.
[... the three messages above repeat for each failed attempt from 18:33:49.574692 through 18:33:49.575764 (errno = 111, tqpair=0x7fcfa0000b90) ...]
00:33:23.706 [2024-07-26 18:33:49.575911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.706 [2024-07-26 18:33:49.575938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420
00:33:23.706 qpair failed and we were unable to recover it.
[... the three messages above repeat for each failed attempt from 18:33:49.576107 through 18:33:49.579011 (errno = 111, tqpair=0x7fcfa8000b90) ...]
00:33:23.706 [2024-07-26 18:33:49.579271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.579297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.579476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.579502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.579646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.579671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.579824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.579849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.579998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.580024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.580171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.580204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.580342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.580369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.580501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.706 [2024-07-26 18:33:49.580527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.706 qpair failed and we were unable to recover it. 00:33:23.706 [2024-07-26 18:33:49.580681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.580706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.580866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.580892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 
00:33:23.707 [2024-07-26 18:33:49.581049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.581080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.581244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.581271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.581431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.581456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.581608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.581633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.581771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.581796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.581988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.582013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.582161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.582190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.582323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.582348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.582481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.582506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 00:33:23.707 [2024-07-26 18:33:49.582667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.707 [2024-07-26 18:33:49.582692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa8000b90 with addr=10.0.0.2, port=4420 00:33:23.707 qpair failed and we were unable to recover it. 
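errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 yet, which is expected at this point in the run because the target's listener has not been created, so the initiator's qpair connects are refused and retried. A minimal sketch that reproduces the same errno from bash (assuming nothing listens on the chosen port; /dev/tcp is bash's built-in TCP redirection):

  # connect(2) to a port with no listener fails with ECONNREFUSED
  # (errno 111), the same failure posix_sock_create logs above
  if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
    echo "connect refused: errno 111 (ECONNREFUSED)"
  fi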
00:33:23.707 Malloc0
00:33:23.707 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:23.707 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:23.707 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:23.707 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:23.707 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue interleaved with the trace above, 18:33:49.582830 through 18:33:49.584090 ...]
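The xtrace lines above are target_disconnect.sh configuring the target over RPC; the bare "Malloc0" is most likely the bdev name echoed back by the preceding bdev-creation RPC. A rough standalone equivalent (a sketch assuming SPDK's scripts/rpc.py talking to a running nvmf_tgt; the 64 MiB size and 512-byte block size are illustrative):

  # create the backing malloc bdev; the RPC prints the bdev name ("Malloc0")
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # initialize the TCP transport, producing the
  # "*** TCP Transport Init ***" notice seen below
  scripts/rpc.py nvmf_create_transport -t tcp -o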
00:33:23.707 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue, 18:33:49.584251 through 18:33:49.585821 ...]
00:33:23.707 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue, 18:33:49.585973 through 18:33:49.586356 ...]
00:33:23.708 [2024-07-26 18:33:49.586454] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:23.708 [... retries continue, 18:33:49.586519 through 18:33:49.587405 ...]
00:33:23.708 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue, 18:33:49.587586 through 18:33:49.594507 ...]
00:33:23.709 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:23.709 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:23.709 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:23.709 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:23.709 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue interleaved with the trace above, 18:33:49.594674 through 18:33:49.595903 ...]
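This is the subsystem-creation step of the same setup. As a standalone sketch (same rpc.py assumption as above), -a allows any host NQN to connect and -s fixes the controller serial number that initiators will see in Identify data:

  # create subsystem cnode1, allow any host, set a fixed serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001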
00:33:23.709 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue, 18:33:49.596029 through 18:33:49.602670 ...]
00:33:23.711 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:23.711 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:23.711 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:23.711 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:23.711 [... connect() failed (errno = 111) / qpair-failed retries for tqpair=0x7fcfa8000b90 continue interleaved with the trace above, 18:33:49.602802 through 18:33:49.604005 ...]
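This step attaches Malloc0 as a namespace of cnode1. In a standalone sketch (same assumptions), it would be followed by a listener on the very address/port the initiator keeps retrying; until that listener exists, every connect() above is refused:

  # expose Malloc0 as a namespace of the subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # a listener on 10.0.0.2:4420 (the address in the errors above) is what
  # finally lets the initiator's connects succeed
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420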
00:33:23.711 [2024-07-26 18:33:49.606568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.711 [2024-07-26 18:33:49.606608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcf98000b90 with addr=10.0.0.2, port=4420
00:33:23.711 qpair failed and we were unable to recover it.
[... the retries now target a freshly allocated tqpair=0x7fcf98000b90; the identical failure pair keeps repeating ...]
00:33:23.712 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:23.712 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:23.712 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:23.712 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect() failures continue until the listener below comes up; records omitted ...]
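host/target_disconnect.sh line 25 then re-adds the data listener that the test removed earlier; once this RPC completes, the target accepts TCP connections on 10.0.0.2 port 4420 again, which the nvmf_tcp_listen NOTICE below confirms. The standalone equivalent, under the same assumptions as the sketch above:

    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420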
00:33:23.713 [2024-07-26 18:33:49.614682] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:23.713 [2024-07-26 18:33:49.617196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:23.713 [2024-07-26 18:33:49.617358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:23.713 [2024-07-26 18:33:49.617386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:23.713 [2024-07-26 18:33:49.617404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:23.713 [2024-07-26 18:33:49.617418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:23.713 [2024-07-26 18:33:49.617468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:23.713 qpair failed and we were unable to recover it.
00:33:23.713 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:23.713 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:23.713 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:23.713 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:23.713 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:23.713 18:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1620192
00:33:23.713 [2024-07-26 18:33:49.627036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:23.713 [2024-07-26 18:33:49.627197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:23.713 [2024-07-26 18:33:49.627225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:23.713 [2024-07-26 18:33:49.627246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:23.713 [2024-07-26 18:33:49.627260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:23.713 [2024-07-26 18:33:49.627291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:23.713 qpair failed and we were unable to recover it.
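The failure mode changes here. The listener is back, so connect() now succeeds, but the host's Fabrics CONNECT for I/O queue pair 4 still carries controller ID 0x1 from before the disconnect; the restarted target no longer knows that controller, logs the Unknown controller ID error, and completes the CONNECT with sct 1, sc 130. Decoded, sc 130 is 0x82, the Connect invalid-parameters status from the NVMe-oF Fabrics command set (SPDK's headers name it SPDK_NVMF_FABRIC_SC_INVALID_PARAM, if the naming has not changed), and the host-side rc -5 and transport error -6 are -EIO and -ENXIO respectively. The script lines in between restore the discovery listener (line 26) and then wait on PID 1620192, the process the test backgrounded earlier. Both status numbers can be decoded on the spot:

    printf '0x%02x\n' 130                          # -> 0x82, the Fabrics Connect status code
    python3 -c 'import os; print(os.strerror(6))'  # -> No such device or address, the -6 above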
00:33:23.713 [2024-07-26 18:33:49.637102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.713 [2024-07-26 18:33:49.637237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.713 [2024-07-26 18:33:49.637264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.713 [2024-07-26 18:33:49.637279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.713 [2024-07-26 18:33:49.637293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.713 [2024-07-26 18:33:49.637324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.713 qpair failed and we were unable to recover it. 00:33:23.713 [2024-07-26 18:33:49.647031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.713 [2024-07-26 18:33:49.647194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.713 [2024-07-26 18:33:49.647221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.713 [2024-07-26 18:33:49.647236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.713 [2024-07-26 18:33:49.647250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.713 [2024-07-26 18:33:49.647280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.713 qpair failed and we were unable to recover it. 00:33:23.713 [2024-07-26 18:33:49.657098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.713 [2024-07-26 18:33:49.657248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.713 [2024-07-26 18:33:49.657274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.713 [2024-07-26 18:33:49.657290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.713 [2024-07-26 18:33:49.657304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.713 [2024-07-26 18:33:49.657334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.713 qpair failed and we were unable to recover it. 
00:33:23.713 [2024-07-26 18:33:49.667100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.713 [2024-07-26 18:33:49.667232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.713 [2024-07-26 18:33:49.667259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.713 [2024-07-26 18:33:49.667273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.713 [2024-07-26 18:33:49.667287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.713 [2024-07-26 18:33:49.667317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.713 qpair failed and we were unable to recover it. 00:33:23.713 [2024-07-26 18:33:49.677139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.713 [2024-07-26 18:33:49.677296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.713 [2024-07-26 18:33:49.677324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.713 [2024-07-26 18:33:49.677339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.713 [2024-07-26 18:33:49.677352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.713 [2024-07-26 18:33:49.677383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.713 qpair failed and we were unable to recover it. 00:33:23.713 [2024-07-26 18:33:49.687208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.713 [2024-07-26 18:33:49.687376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.713 [2024-07-26 18:33:49.687405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.687420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.687434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.687465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 
00:33:23.714 [2024-07-26 18:33:49.697131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.697270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.697297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.697312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.697325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.697356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.707146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.707280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.707306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.707321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.707334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.707365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.717192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.717335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.717366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.717382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.717396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.717426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 
00:33:23.714 [2024-07-26 18:33:49.727221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.727360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.727386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.727400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.727414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.727446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.737271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.737402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.737429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.737443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.737456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.737486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.747317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.747458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.747485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.747500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.747514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.747555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 
00:33:23.714 [2024-07-26 18:33:49.757367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.757505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.757531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.757547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.757561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.757609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.767365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.767506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.767532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.767547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.767561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.767603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.777408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.777556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.777582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.777597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.777611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.777641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 
00:33:23.714 [2024-07-26 18:33:49.787446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.787597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.787624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.787639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.787654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.787695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.797445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.797589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.797614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.714 [2024-07-26 18:33:49.797629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.714 [2024-07-26 18:33:49.797642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.714 [2024-07-26 18:33:49.797675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.714 qpair failed and we were unable to recover it. 00:33:23.714 [2024-07-26 18:33:49.807443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.714 [2024-07-26 18:33:49.807586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.714 [2024-07-26 18:33:49.807617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.715 [2024-07-26 18:33:49.807633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.715 [2024-07-26 18:33:49.807647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.715 [2024-07-26 18:33:49.807677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.715 qpair failed and we were unable to recover it. 
00:33:23.715 [2024-07-26 18:33:49.817530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.715 [2024-07-26 18:33:49.817669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.715 [2024-07-26 18:33:49.817695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.715 [2024-07-26 18:33:49.817710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.715 [2024-07-26 18:33:49.817723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.715 [2024-07-26 18:33:49.817754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.715 qpair failed and we were unable to recover it. 00:33:23.975 [2024-07-26 18:33:49.827514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.975 [2024-07-26 18:33:49.827654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.975 [2024-07-26 18:33:49.827679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.975 [2024-07-26 18:33:49.827694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.975 [2024-07-26 18:33:49.827708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.827741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.837571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.837712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.837737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.837752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.837766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.837797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 
00:33:23.976 [2024-07-26 18:33:49.847590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.847729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.847755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.847770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.847783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.847819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.857598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.857745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.857771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.857786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.857801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.857832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.867700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.867845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.867871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.867886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.867901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.867932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 
00:33:23.976 [2024-07-26 18:33:49.877706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.877882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.877909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.877923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.877936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.877966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.887682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.887820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.887846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.887861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.887875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.887905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.897780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.897946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.897975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.897990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.898004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.898035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 
00:33:23.976 [2024-07-26 18:33:49.907756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.907885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.907911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.907926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.907939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.907970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.917800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.917934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.917960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.917974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.917988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.918018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.927817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.927962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.927989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.928009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.928023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.928055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 
00:33:23.976 [2024-07-26 18:33:49.937801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.937939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.937965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.937979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.937998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.938029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.976 [2024-07-26 18:33:49.947878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.976 [2024-07-26 18:33:49.948016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.976 [2024-07-26 18:33:49.948042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.976 [2024-07-26 18:33:49.948057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.976 [2024-07-26 18:33:49.948082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.976 [2024-07-26 18:33:49.948113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.976 qpair failed and we were unable to recover it. 00:33:23.977 [2024-07-26 18:33:49.957875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.977 [2024-07-26 18:33:49.958007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.977 [2024-07-26 18:33:49.958032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.977 [2024-07-26 18:33:49.958047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.977 [2024-07-26 18:33:49.958068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:23.977 [2024-07-26 18:33:49.958103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.977 qpair failed and we were unable to recover it. 
[63 further failure blocks omitted: the identical sequence (Unknown controller ID 0x1, CONNECT rejected with sct 1, sc 130, CQ transport error -6 on qpair id 4, "qpair failed and we were unable to recover it.") repeats for every connect attempt on tqpair=0x7fcf98000b90 from 18:33:49.967 to 18:33:50.589.]
00:33:24.504 [2024-07-26 18:33:50.599720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.504 [2024-07-26 18:33:50.599855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.504 [2024-07-26 18:33:50.599881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.504 [2024-07-26 18:33:50.599896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.504 [2024-07-26 18:33:50.599910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.504 [2024-07-26 18:33:50.599941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.504 qpair failed and we were unable to recover it. 00:33:24.504 [2024-07-26 18:33:50.609755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.504 [2024-07-26 18:33:50.609895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.504 [2024-07-26 18:33:50.609925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.504 [2024-07-26 18:33:50.609941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.504 [2024-07-26 18:33:50.609954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.504 [2024-07-26 18:33:50.609985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.504 qpair failed and we were unable to recover it. 00:33:24.504 [2024-07-26 18:33:50.619804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.505 [2024-07-26 18:33:50.619975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.505 [2024-07-26 18:33:50.620001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.505 [2024-07-26 18:33:50.620016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.505 [2024-07-26 18:33:50.620029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.505 [2024-07-26 18:33:50.620068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.505 qpair failed and we were unable to recover it. 
00:33:24.505 [2024-07-26 18:33:50.629779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.505 [2024-07-26 18:33:50.629915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.505 [2024-07-26 18:33:50.629941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.505 [2024-07-26 18:33:50.629956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.505 [2024-07-26 18:33:50.629969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.505 [2024-07-26 18:33:50.630000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.505 qpair failed and we were unable to recover it. 00:33:24.505 [2024-07-26 18:33:50.639842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.505 [2024-07-26 18:33:50.639971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.505 [2024-07-26 18:33:50.639997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.505 [2024-07-26 18:33:50.640011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.505 [2024-07-26 18:33:50.640024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.505 [2024-07-26 18:33:50.640055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.505 qpair failed and we were unable to recover it. 00:33:24.766 [2024-07-26 18:33:50.649888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.766 [2024-07-26 18:33:50.650077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.766 [2024-07-26 18:33:50.650103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.766 [2024-07-26 18:33:50.650117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.766 [2024-07-26 18:33:50.650131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.766 [2024-07-26 18:33:50.650168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.766 qpair failed and we were unable to recover it. 
00:33:24.766 [2024-07-26 18:33:50.659940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.766 [2024-07-26 18:33:50.660085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.766 [2024-07-26 18:33:50.660111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.766 [2024-07-26 18:33:50.660127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.766 [2024-07-26 18:33:50.660140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.766 [2024-07-26 18:33:50.660171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.766 qpair failed and we were unable to recover it. 00:33:24.766 [2024-07-26 18:33:50.669930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.766 [2024-07-26 18:33:50.670080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.766 [2024-07-26 18:33:50.670106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.766 [2024-07-26 18:33:50.670121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.766 [2024-07-26 18:33:50.670134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.766 [2024-07-26 18:33:50.670165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.766 qpair failed and we were unable to recover it. 00:33:24.766 [2024-07-26 18:33:50.679923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.766 [2024-07-26 18:33:50.680066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.766 [2024-07-26 18:33:50.680092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.766 [2024-07-26 18:33:50.680107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.766 [2024-07-26 18:33:50.680121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.766 [2024-07-26 18:33:50.680152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.766 qpair failed and we were unable to recover it. 
00:33:24.766 [2024-07-26 18:33:50.690072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.766 [2024-07-26 18:33:50.690211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.766 [2024-07-26 18:33:50.690237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.766 [2024-07-26 18:33:50.690252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.766 [2024-07-26 18:33:50.690266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.766 [2024-07-26 18:33:50.690296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.766 qpair failed and we were unable to recover it. 00:33:24.766 [2024-07-26 18:33:50.699981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.766 [2024-07-26 18:33:50.700125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.766 [2024-07-26 18:33:50.700156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.766 [2024-07-26 18:33:50.700172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.766 [2024-07-26 18:33:50.700185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.766 [2024-07-26 18:33:50.700216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.766 qpair failed and we were unable to recover it. 00:33:24.766 [2024-07-26 18:33:50.710101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.710259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.710285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.710300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.710313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.710344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 
00:33:24.767 [2024-07-26 18:33:50.720050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.720195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.720220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.720235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.720248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.720279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.730097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.730238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.730265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.730280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.730294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.730337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.740095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.740238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.740264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.740279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.740298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.740330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 
00:33:24.767 [2024-07-26 18:33:50.750154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.750336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.750373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.750388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.750402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.750432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.760192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.760335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.760366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.760380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.760395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.760427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.770213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.770354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.770379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.770394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.770407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.770440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 
00:33:24.767 [2024-07-26 18:33:50.780268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.780444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.780470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.780485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.780498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.780529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.790275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.790418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.790443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.790458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.790471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.790503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.800317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.800454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.800479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.800494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.800506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.800539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 
00:33:24.767 [2024-07-26 18:33:50.810311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.810463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.810489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.810505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.810518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.810548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.820383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.820569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.767 [2024-07-26 18:33:50.820594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.767 [2024-07-26 18:33:50.820609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.767 [2024-07-26 18:33:50.820623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.767 [2024-07-26 18:33:50.820652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.767 qpair failed and we were unable to recover it. 00:33:24.767 [2024-07-26 18:33:50.830404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.767 [2024-07-26 18:33:50.830541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.830566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.830587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.830601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.830632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 
00:33:24.768 [2024-07-26 18:33:50.840421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.840563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.840588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.840603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.840616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.840649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 00:33:24.768 [2024-07-26 18:33:50.850419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.850564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.850590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.850605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.850618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.850649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 00:33:24.768 [2024-07-26 18:33:50.860480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.860663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.860688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.860702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.860716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.860748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 
00:33:24.768 [2024-07-26 18:33:50.870557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.870691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.870717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.870732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.870745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.870789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 00:33:24.768 [2024-07-26 18:33:50.880527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.880662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.880689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.880704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.880717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.880759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 00:33:24.768 [2024-07-26 18:33:50.890632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.890779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.890804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.890820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.890833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.890863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 
00:33:24.768 [2024-07-26 18:33:50.900587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.768 [2024-07-26 18:33:50.900728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.768 [2024-07-26 18:33:50.900754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.768 [2024-07-26 18:33:50.900768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.768 [2024-07-26 18:33:50.900782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:24.768 [2024-07-26 18:33:50.900812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.768 qpair failed and we were unable to recover it. 00:33:25.031 [2024-07-26 18:33:50.910594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.910726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.910752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.910767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.910780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.910810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 00:33:25.031 [2024-07-26 18:33:50.920636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.920830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.920855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.920879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.920894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.920924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 
00:33:25.031 [2024-07-26 18:33:50.930698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.930836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.930862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.930877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.930890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.930920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 00:33:25.031 [2024-07-26 18:33:50.940685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.940831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.940857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.940872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.940885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.940915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 00:33:25.031 [2024-07-26 18:33:50.950718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.950905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.950930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.950945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.950958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.950988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 
00:33:25.031 [2024-07-26 18:33:50.960845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.961013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.961038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.961053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.961079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.961123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 00:33:25.031 [2024-07-26 18:33:50.970770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.970936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.031 [2024-07-26 18:33:50.970963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.031 [2024-07-26 18:33:50.970979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.031 [2024-07-26 18:33:50.970997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.031 [2024-07-26 18:33:50.971030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.031 qpair failed and we were unable to recover it. 00:33:25.031 [2024-07-26 18:33:50.980820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.031 [2024-07-26 18:33:50.980987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:50.981013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:50.981028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:50.981041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:50.981079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 
00:33:25.032 [2024-07-26 18:33:50.990814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:50.990963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:50.990988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:50.991003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:50.991017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:50.991047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.000866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.001003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.001028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.001044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.001057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.001100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.010871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.011006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.011036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.011052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.011074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.011105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 
00:33:25.032 [2024-07-26 18:33:51.020902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.021086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.021112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.021127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.021140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.021170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.030956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.031134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.031160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.031174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.031188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.031230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.040986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.041130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.041156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.041170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.041184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.041214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 
00:33:25.032 [2024-07-26 18:33:51.051026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.051183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.051209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.051224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.051238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.051274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.061029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.061201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.061227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.061243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.061256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.061286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.071040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.071177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.071202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.071217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.071231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.071261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 
00:33:25.032 [2024-07-26 18:33:51.081076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.081210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.081236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.081250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.081264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.081294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.091115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.091249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.032 [2024-07-26 18:33:51.091275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.032 [2024-07-26 18:33:51.091290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.032 [2024-07-26 18:33:51.091303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.032 [2024-07-26 18:33:51.091335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.032 qpair failed and we were unable to recover it. 00:33:25.032 [2024-07-26 18:33:51.101179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.032 [2024-07-26 18:33:51.101324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.101355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.101371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.101385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.101415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 
00:33:25.033 [2024-07-26 18:33:51.111190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.111355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.111382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.111397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.111415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.111448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 00:33:25.033 [2024-07-26 18:33:51.121247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.121381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.121408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.121423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.121437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.121467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 00:33:25.033 [2024-07-26 18:33:51.131213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.131357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.131383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.131398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.131412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.131443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 
00:33:25.033 [2024-07-26 18:33:51.141250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.141432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.141458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.141474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.141492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.141524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 00:33:25.033 [2024-07-26 18:33:51.151268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.151411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.151437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.151452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.151466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.151496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 00:33:25.033 [2024-07-26 18:33:51.161321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.161456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.161482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.161497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.161510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.161541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 
00:33:25.033 [2024-07-26 18:33:51.171370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.033 [2024-07-26 18:33:51.171512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.033 [2024-07-26 18:33:51.171537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.033 [2024-07-26 18:33:51.171552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.033 [2024-07-26 18:33:51.171566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.033 [2024-07-26 18:33:51.171596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.033 qpair failed and we were unable to recover it. 00:33:25.294 [2024-07-26 18:33:51.181391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.294 [2024-07-26 18:33:51.181530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.294 [2024-07-26 18:33:51.181555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.294 [2024-07-26 18:33:51.181570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.294 [2024-07-26 18:33:51.181584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.294 [2024-07-26 18:33:51.181626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.294 qpair failed and we were unable to recover it. 00:33:25.294 [2024-07-26 18:33:51.191403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.294 [2024-07-26 18:33:51.191542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.294 [2024-07-26 18:33:51.191568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.294 [2024-07-26 18:33:51.191583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.294 [2024-07-26 18:33:51.191595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.294 [2024-07-26 18:33:51.191625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.294 qpair failed and we were unable to recover it. 
00:33:25.294 [2024-07-26 18:33:51.201420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.294 [2024-07-26 18:33:51.201552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.294 [2024-07-26 18:33:51.201577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.294 [2024-07-26 18:33:51.201591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.294 [2024-07-26 18:33:51.201605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.294 [2024-07-26 18:33:51.201636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.294 qpair failed and we were unable to recover it. 00:33:25.294 [2024-07-26 18:33:51.211509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.294 [2024-07-26 18:33:51.211667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.294 [2024-07-26 18:33:51.211692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.294 [2024-07-26 18:33:51.211707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.294 [2024-07-26 18:33:51.211720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.294 [2024-07-26 18:33:51.211751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.294 qpair failed and we were unable to recover it. 00:33:25.294 [2024-07-26 18:33:51.221529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.294 [2024-07-26 18:33:51.221674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.221699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.221714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.221728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.221758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 
00:33:25.295 [2024-07-26 18:33:51.231518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.231661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.231686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.231701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.231721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.231752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.241563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.241697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.241722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.241736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.241749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.241778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.251591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.251739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.251765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.251780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.251797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.251828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 
00:33:25.295 [2024-07-26 18:33:51.261624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.261799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.261825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.261840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.261853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.261883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.271689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.271868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.271896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.271915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.271929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.271961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.281673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.281847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.281874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.281888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.281902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.281932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 
00:33:25.295 [2024-07-26 18:33:51.291735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.291895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.291921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.291936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.291950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.291981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.301740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.301892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.301917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.301932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.301946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.301976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.311721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.311857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.311882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.311898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.311911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.311943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 
00:33:25.295 [2024-07-26 18:33:51.321793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.321925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.321951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.321971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.321985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.322016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.331813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.331954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.295 [2024-07-26 18:33:51.331980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.295 [2024-07-26 18:33:51.331995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.295 [2024-07-26 18:33:51.332009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.295 [2024-07-26 18:33:51.332051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.295 qpair failed and we were unable to recover it. 00:33:25.295 [2024-07-26 18:33:51.341837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.295 [2024-07-26 18:33:51.341973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.341999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.342014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.342028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.342066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 
00:33:25.296 [2024-07-26 18:33:51.351842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.351976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.352002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.352017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.352031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.352071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 00:33:25.296 [2024-07-26 18:33:51.361912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.362112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.362138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.362153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.362167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.362199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 00:33:25.296 [2024-07-26 18:33:51.371907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.372064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.372090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.372105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.372119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.372150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 
00:33:25.296 [2024-07-26 18:33:51.381924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.382057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.382090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.382104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.382118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.382147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 00:33:25.296 [2024-07-26 18:33:51.391993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.392171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.392197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.392212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.392225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.392256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 00:33:25.296 [2024-07-26 18:33:51.401992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.402141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.402168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.402186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.402200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.402230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 
00:33:25.296 [2024-07-26 18:33:51.412099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.412243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.412274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.412292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.412306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.412349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 00:33:25.296 [2024-07-26 18:33:51.422043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.422195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.422222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.422237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.422250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.422283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 00:33:25.296 [2024-07-26 18:33:51.432136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.296 [2024-07-26 18:33:51.432271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.296 [2024-07-26 18:33:51.432298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.296 [2024-07-26 18:33:51.432313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.296 [2024-07-26 18:33:51.432327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.296 [2024-07-26 18:33:51.432360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.296 qpair failed and we were unable to recover it. 
00:33:25.558 [2024-07-26 18:33:51.442108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.558 [2024-07-26 18:33:51.442289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.558 [2024-07-26 18:33:51.442315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.558 [2024-07-26 18:33:51.442330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.558 [2024-07-26 18:33:51.442344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.558 [2024-07-26 18:33:51.442376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.558 qpair failed and we were unable to recover it. 00:33:25.558 [2024-07-26 18:33:51.452149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.558 [2024-07-26 18:33:51.452293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.558 [2024-07-26 18:33:51.452319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.558 [2024-07-26 18:33:51.452334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.558 [2024-07-26 18:33:51.452347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.558 [2024-07-26 18:33:51.452385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.558 qpair failed and we were unable to recover it. 00:33:25.558 [2024-07-26 18:33:51.462206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.558 [2024-07-26 18:33:51.462342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.558 [2024-07-26 18:33:51.462368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.558 [2024-07-26 18:33:51.462383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.558 [2024-07-26 18:33:51.462397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.558 [2024-07-26 18:33:51.462426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.558 qpair failed and we were unable to recover it. 
00:33:25.559 [2024-07-26 18:33:51.472207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.472342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.472367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.472382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.472395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.472425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.482247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.482426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.482452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.482467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.482480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.482522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.492264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.492405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.492430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.492445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.492459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.492488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 
00:33:25.559 [2024-07-26 18:33:51.502308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.502457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.502487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.502503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.502516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.502548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.512341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.512481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.512506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.512522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.512535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.512565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.522379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.522551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.522577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.522592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.522605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.522636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 
00:33:25.559 [2024-07-26 18:33:51.532410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.532548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.532574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.532588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.532602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.532631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.542433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.542614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.542640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.542655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.542669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.542705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.552406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.552540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.552565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.552580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.552594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.552626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 
00:33:25.559 [2024-07-26 18:33:51.562481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.562630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.562655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.562670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.562684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.562714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.572493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.572631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.572657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.572671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.559 [2024-07-26 18:33:51.572685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.559 [2024-07-26 18:33:51.572715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.559 qpair failed and we were unable to recover it. 00:33:25.559 [2024-07-26 18:33:51.582499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.559 [2024-07-26 18:33:51.582662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.559 [2024-07-26 18:33:51.582688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.559 [2024-07-26 18:33:51.582703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.582717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.582747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 
00:33:25.560 [2024-07-26 18:33:51.592558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.592702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.592728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.592743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.592756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.592786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.602593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.602759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.602784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.602799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.602813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.602843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.612585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.612721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.612747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.612762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.612776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.612808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 
00:33:25.560 [2024-07-26 18:33:51.622643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.622790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.622816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.622830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.622844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.622875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.632651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.632817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.632843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.632858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.632876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.632906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.642675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.642857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.642882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.642897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.642910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.642941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 
00:33:25.560 [2024-07-26 18:33:51.652758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.652926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.652952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.652967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.652980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.653013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.662791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.662943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.662970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.662985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.662999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.663041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.672818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.672992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.673017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.673033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.673046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.673086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 
00:33:25.560 [2024-07-26 18:33:51.682813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.682950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.560 [2024-07-26 18:33:51.682977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.560 [2024-07-26 18:33:51.682992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.560 [2024-07-26 18:33:51.683005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.560 [2024-07-26 18:33:51.683037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.560 qpair failed and we were unable to recover it. 00:33:25.560 [2024-07-26 18:33:51.692822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.560 [2024-07-26 18:33:51.692959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.561 [2024-07-26 18:33:51.692985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.561 [2024-07-26 18:33:51.692999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.561 [2024-07-26 18:33:51.693013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.561 [2024-07-26 18:33:51.693044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.561 qpair failed and we were unable to recover it. 00:33:25.822 [2024-07-26 18:33:51.702841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.822 [2024-07-26 18:33:51.702980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.822 [2024-07-26 18:33:51.703006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.822 [2024-07-26 18:33:51.703021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.822 [2024-07-26 18:33:51.703034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:25.822 [2024-07-26 18:33:51.703087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:25.822 qpair failed and we were unable to recover it. 
00:33:25.822 [2024-07-26 18:33:51.712962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.822 [2024-07-26 18:33:51.713103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.822 [2024-07-26 18:33:51.713130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.822 [2024-07-26 18:33:51.713145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.822 [2024-07-26 18:33:51.713159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.822 [2024-07-26 18:33:51.713201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.822 qpair failed and we were unable to recover it.
00:33:25.822 [2024-07-26 18:33:51.722938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.822 [2024-07-26 18:33:51.723076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.822 [2024-07-26 18:33:51.723102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.822 [2024-07-26 18:33:51.723123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.822 [2024-07-26 18:33:51.723137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.822 [2024-07-26 18:33:51.723169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.822 qpair failed and we were unable to recover it.
00:33:25.822 [2024-07-26 18:33:51.732984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.822 [2024-07-26 18:33:51.733144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.822 [2024-07-26 18:33:51.733170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.822 [2024-07-26 18:33:51.733184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.822 [2024-07-26 18:33:51.733198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.822 [2024-07-26 18:33:51.733228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.822 qpair failed and we were unable to recover it.
00:33:25.822 [2024-07-26 18:33:51.743049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.822 [2024-07-26 18:33:51.743202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.822 [2024-07-26 18:33:51.743228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.822 [2024-07-26 18:33:51.743243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.822 [2024-07-26 18:33:51.743256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.822 [2024-07-26 18:33:51.743298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.822 qpair failed and we were unable to recover it.
00:33:25.822 [2024-07-26 18:33:51.752992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.822 [2024-07-26 18:33:51.753134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.822 [2024-07-26 18:33:51.753160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.822 [2024-07-26 18:33:51.753175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.822 [2024-07-26 18:33:51.753188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.822 [2024-07-26 18:33:51.753220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.822 qpair failed and we were unable to recover it.
00:33:25.822 [2024-07-26 18:33:51.763000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.822 [2024-07-26 18:33:51.763157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.763183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.763198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.763212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.763242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.773076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.773217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.773242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.773256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.773270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.773300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.783051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.783192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.783217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.783232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.783246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.783278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.793116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.793282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.793307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.793322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.793335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.793367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.803118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.803257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.803282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.803297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.803311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.803341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.813158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.813309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.813339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.813355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.813368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.813399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.823183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.823336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.823362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.823376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.823390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.823420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.833186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.833320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.833346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.833360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.833374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.833404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.843228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.843361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.843386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.843401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.843415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.843445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.853356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.823 [2024-07-26 18:33:51.853515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.823 [2024-07-26 18:33:51.853540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.823 [2024-07-26 18:33:51.853555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.823 [2024-07-26 18:33:51.853568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.823 [2024-07-26 18:33:51.853604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.823 qpair failed and we were unable to recover it.
00:33:25.823 [2024-07-26 18:33:51.863341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.863489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.863517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.863537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.863551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.863583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.873358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.873494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.873520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.873535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.873549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.873581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.883338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.883468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.883495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.883510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.883524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.883555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.893435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.893574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.893600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.893615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.893629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.893659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.903402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.903542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.903573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.903589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.903602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.903633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.913461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.913595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.913621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.913635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.913649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.913679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.923456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.923648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.923673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.923689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.923702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.923732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.933542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.933711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.933737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.933752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.933765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.933795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.943527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.943655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.943680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.943695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.943709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.943744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.953552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.953679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.824 [2024-07-26 18:33:51.953704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.824 [2024-07-26 18:33:51.953719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.824 [2024-07-26 18:33:51.953732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.824 [2024-07-26 18:33:51.953762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.824 qpair failed and we were unable to recover it.
00:33:25.824 [2024-07-26 18:33:51.963615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:25.824 [2024-07-26 18:33:51.963749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:25.825 [2024-07-26 18:33:51.963775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:25.825 [2024-07-26 18:33:51.963790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:25.825 [2024-07-26 18:33:51.963803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:25.825 [2024-07-26 18:33:51.963833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:25.825 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:51.973620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:51.973764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:51.973790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:51.973805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:51.973819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:51.973850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:51.983698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:51.983879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:51.983904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:51.983919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:51.983933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:51.983963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:51.993709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:51.993856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:51.993887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:51.993902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:51.993916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:51.993946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:52.003685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:52.003817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:52.003850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:52.003865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:52.003878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:52.003909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:52.013747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:52.013895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:52.013920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:52.013935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:52.013948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:52.013978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:52.023771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:52.023912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:52.023938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:52.023952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:52.023966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:52.023996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:52.033809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:52.033946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:52.033976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:52.033992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:52.034010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:52.034042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:52.043813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:52.043950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:52.043976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:52.043992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:52.044005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.085 [2024-07-26 18:33:52.044038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.085 qpair failed and we were unable to recover it.
00:33:26.085 [2024-07-26 18:33:52.053847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.085 [2024-07-26 18:33:52.054029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.085 [2024-07-26 18:33:52.054055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.085 [2024-07-26 18:33:52.054079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.085 [2024-07-26 18:33:52.054093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.054124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.063856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.063990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.064016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.064031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.064044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.064083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.073929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.074070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.074096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.074111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.074124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.074155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.083926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.084070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.084097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.084112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.084125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.084156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.093971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.094118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.094144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.094159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.094172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.094202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.103980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.104127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.104155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.104170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.104184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.104215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.114035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.114209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.114236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.114250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.114264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.114296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.124029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.124169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.124196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.124216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.124231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.124262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.134094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.134239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.134265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.134284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.134298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.134329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.144140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.144308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.144334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.144348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.144362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.144393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.154116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.154263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.154289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.154305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.154318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.154349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.164183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.164335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.164360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.164375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.164389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.086 [2024-07-26 18:33:52.164419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.086 qpair failed and we were unable to recover it.
00:33:26.086 [2024-07-26 18:33:52.174192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.086 [2024-07-26 18:33:52.174334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.086 [2024-07-26 18:33:52.174359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.086 [2024-07-26 18:33:52.174374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.086 [2024-07-26 18:33:52.174387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.087 [2024-07-26 18:33:52.174418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.087 qpair failed and we were unable to recover it.
00:33:26.087 [2024-07-26 18:33:52.184211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.087 [2024-07-26 18:33:52.184358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.087 [2024-07-26 18:33:52.184383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.087 [2024-07-26 18:33:52.184398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.087 [2024-07-26 18:33:52.184411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.087 [2024-07-26 18:33:52.184441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.087 qpair failed and we were unable to recover it.
00:33:26.087 [2024-07-26 18:33:52.194246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.087 [2024-07-26 18:33:52.194383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.087 [2024-07-26 18:33:52.194409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.087 [2024-07-26 18:33:52.194424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.087 [2024-07-26 18:33:52.194437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.087 [2024-07-26 18:33:52.194466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.087 qpair failed and we were unable to recover it.
00:33:26.087 [2024-07-26 18:33:52.204279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.087 [2024-07-26 18:33:52.204410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.087 [2024-07-26 18:33:52.204436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.087 [2024-07-26 18:33:52.204451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.087 [2024-07-26 18:33:52.204465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.087 [2024-07-26 18:33:52.204495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.087 qpair failed and we were unable to recover it.
00:33:26.087 [2024-07-26 18:33:52.214329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.087 [2024-07-26 18:33:52.214469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.087 [2024-07-26 18:33:52.214494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.087 [2024-07-26 18:33:52.214515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.087 [2024-07-26 18:33:52.214529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.087 [2024-07-26 18:33:52.214560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.087 qpair failed and we were unable to recover it.
00:33:26.087 [2024-07-26 18:33:52.224313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.087 [2024-07-26 18:33:52.224451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.087 [2024-07-26 18:33:52.224477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.087 [2024-07-26 18:33:52.224491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.087 [2024-07-26 18:33:52.224504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.087 [2024-07-26 18:33:52.224534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.087 qpair failed and we were unable to recover it.
00:33:26.347 [2024-07-26 18:33:52.234351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.347 [2024-07-26 18:33:52.234500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.347 [2024-07-26 18:33:52.234526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.347 [2024-07-26 18:33:52.234541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.347 [2024-07-26 18:33:52.234554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.347 [2024-07-26 18:33:52.234584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.347 qpair failed and we were unable to recover it.
00:33:26.347 [2024-07-26 18:33:52.244370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.347 [2024-07-26 18:33:52.244528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.347 [2024-07-26 18:33:52.244552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.347 [2024-07-26 18:33:52.244565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.347 [2024-07-26 18:33:52.244578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.347 [2024-07-26 18:33:52.244607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.347 qpair failed and we were unable to recover it.
00:33:26.347 [2024-07-26 18:33:52.254436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.347 [2024-07-26 18:33:52.254576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.347 [2024-07-26 18:33:52.254601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.347 [2024-07-26 18:33:52.254616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.347 [2024-07-26 18:33:52.254630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.347 [2024-07-26 18:33:52.254661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.347 qpair failed and we were unable to recover it.
00:33:26.347 [2024-07-26 18:33:52.264424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.347 [2024-07-26 18:33:52.264561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.347 [2024-07-26 18:33:52.264587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.347 [2024-07-26 18:33:52.264602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.347 [2024-07-26 18:33:52.264615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.347 [2024-07-26 18:33:52.264645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.347 qpair failed and we were unable to recover it.
00:33:26.347 [2024-07-26 18:33:52.274484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.347 [2024-07-26 18:33:52.274612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.347 [2024-07-26 18:33:52.274637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.347 [2024-07-26 18:33:52.274652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.347 [2024-07-26 18:33:52.274666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.347 [2024-07-26 18:33:52.274695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.347 qpair failed and we were unable to recover it.
00:33:26.347 [2024-07-26 18:33:52.284526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.347 [2024-07-26 18:33:52.284663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.347 [2024-07-26 18:33:52.284688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.347 [2024-07-26 18:33:52.284703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.347 [2024-07-26 18:33:52.284716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.347 [2024-07-26 18:33:52.284747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.347 qpair failed and we were unable to recover it. 00:33:26.347 [2024-07-26 18:33:52.294563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.347 [2024-07-26 18:33:52.294749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.347 [2024-07-26 18:33:52.294775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.347 [2024-07-26 18:33:52.294789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.347 [2024-07-26 18:33:52.294803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.347 [2024-07-26 18:33:52.294834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.347 qpair failed and we were unable to recover it. 00:33:26.347 [2024-07-26 18:33:52.304537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.304694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.304725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.304740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.304754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.304784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 
00:33:26.348 [2024-07-26 18:33:52.314595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.314726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.314752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.314767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.314780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.314810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.324590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.324720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.324746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.324760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.324774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.324815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.334630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.334771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.334797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.334812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.334825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.334856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 
00:33:26.348 [2024-07-26 18:33:52.344672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.344845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.344871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.344886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.344900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.344935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.354729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.354870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.354896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.354910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.354924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.354954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.364712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.364841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.364867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.364882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.364895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.364926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 
00:33:26.348 [2024-07-26 18:33:52.374765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.374954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.374981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.374997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.375011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.375042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.384828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.385003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.385029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.385044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.385064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.385096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.394826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.395000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.395033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.395049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.395072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.395106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 
00:33:26.348 [2024-07-26 18:33:52.404845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.404985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.348 [2024-07-26 18:33:52.405010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.348 [2024-07-26 18:33:52.405025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.348 [2024-07-26 18:33:52.405038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.348 [2024-07-26 18:33:52.405087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.348 qpair failed and we were unable to recover it. 00:33:26.348 [2024-07-26 18:33:52.414873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.348 [2024-07-26 18:33:52.415038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.415071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.415088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.415102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.415132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 00:33:26.349 [2024-07-26 18:33:52.425015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.425168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.425194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.425209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.425223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.425253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 
00:33:26.349 [2024-07-26 18:33:52.434961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.435097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.435131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.435147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.435166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.435200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 00:33:26.349 [2024-07-26 18:33:52.445071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.445241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.445266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.445281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.445294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.445326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 00:33:26.349 [2024-07-26 18:33:52.455050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.455207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.455232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.455248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.455261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.455291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 
00:33:26.349 [2024-07-26 18:33:52.465049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.465189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.465215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.465229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.465242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.465274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 00:33:26.349 [2024-07-26 18:33:52.475120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.475259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.475285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.475300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.475314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.475356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 00:33:26.349 [2024-07-26 18:33:52.485125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.349 [2024-07-26 18:33:52.485271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.349 [2024-07-26 18:33:52.485296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.349 [2024-07-26 18:33:52.485311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.349 [2024-07-26 18:33:52.485325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.349 [2024-07-26 18:33:52.485356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.349 qpair failed and we were unable to recover it. 
00:33:26.609 [2024-07-26 18:33:52.495117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.495253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.495279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.495294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.495308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.495352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 00:33:26.609 [2024-07-26 18:33:52.505120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.505256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.505282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.505297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.505313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.505345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 00:33:26.609 [2024-07-26 18:33:52.515203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.515378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.515406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.515422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.515435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.515467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 
00:33:26.609 [2024-07-26 18:33:52.525190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.525322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.525348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.525379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.525394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.525425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 00:33:26.609 [2024-07-26 18:33:52.535224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.535363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.535389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.535404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.535417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.535448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 00:33:26.609 [2024-07-26 18:33:52.545277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.545424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.545449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.545464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.545478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.545508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 
00:33:26.609 [2024-07-26 18:33:52.555303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.609 [2024-07-26 18:33:52.555461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.609 [2024-07-26 18:33:52.555487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.609 [2024-07-26 18:33:52.555502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.609 [2024-07-26 18:33:52.555515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.609 [2024-07-26 18:33:52.555545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.609 qpair failed and we were unable to recover it. 00:33:26.609 [2024-07-26 18:33:52.565278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.565431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.565457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.565472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.565486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.565517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.575405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.575593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.575619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.575635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.575648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.575678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 
00:33:26.610 [2024-07-26 18:33:52.585399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.585554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.585579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.585594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.585608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.585639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.595379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.595517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.595542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.595557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.595570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.595601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.605425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.605563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.605589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.605604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.605617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.605647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 
00:33:26.610 [2024-07-26 18:33:52.615419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.615605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.615630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.615651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.615666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.615698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.625478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.625663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.625690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.625705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.625718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.625750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.635530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.635690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.635718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.635733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.635747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.635778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 
00:33:26.610 [2024-07-26 18:33:52.645511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.645644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.645670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.645685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.645698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.645729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.655570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.655706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.655731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.655747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.655760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.655791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.665595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.665728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.665753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.665768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.665782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.665815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 
00:33:26.610 [2024-07-26 18:33:52.675619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.610 [2024-07-26 18:33:52.675800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.610 [2024-07-26 18:33:52.675827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.610 [2024-07-26 18:33:52.675841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.610 [2024-07-26 18:33:52.675855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.610 [2024-07-26 18:33:52.675897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.610 qpair failed and we were unable to recover it. 00:33:26.610 [2024-07-26 18:33:52.685606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.685768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.685794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.685808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.685822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.685852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 00:33:26.611 [2024-07-26 18:33:52.695642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.695783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.695809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.695824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.695837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.695867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 
00:33:26.611 [2024-07-26 18:33:52.705759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.705929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.705960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.705976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.705989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.706030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 00:33:26.611 [2024-07-26 18:33:52.715741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.715886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.715912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.715927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.715940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.715972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 00:33:26.611 [2024-07-26 18:33:52.725741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.725875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.725901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.725915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.725929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.725959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 
00:33:26.611 [2024-07-26 18:33:52.735810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.735988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.736014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.736029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.736042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.736079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 00:33:26.611 [2024-07-26 18:33:52.745839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.611 [2024-07-26 18:33:52.745982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.611 [2024-07-26 18:33:52.746011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.611 [2024-07-26 18:33:52.746026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.611 [2024-07-26 18:33:52.746041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.611 [2024-07-26 18:33:52.746087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.611 qpair failed and we were unable to recover it. 00:33:26.870 [2024-07-26 18:33:52.755843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.870 [2024-07-26 18:33:52.755980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-07-26 18:33:52.756005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-07-26 18:33:52.756021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-07-26 18:33:52.756034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.871 [2024-07-26 18:33:52.756073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.871 qpair failed and we were unable to recover it. 
00:33:26.871 [2024-07-26 18:33:52.765880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.871 [2024-07-26 18:33:52.766023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-07-26 18:33:52.766049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-07-26 18:33:52.766071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-07-26 18:33:52.766086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.871 [2024-07-26 18:33:52.766117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.871 qpair failed and we were unable to recover it. 00:33:26.871 [2024-07-26 18:33:52.775903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.871 [2024-07-26 18:33:52.776043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-07-26 18:33:52.776079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-07-26 18:33:52.776095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-07-26 18:33:52.776109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.871 [2024-07-26 18:33:52.776138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.871 qpair failed and we were unable to recover it. 00:33:26.871 [2024-07-26 18:33:52.785927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.871 [2024-07-26 18:33:52.786071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-07-26 18:33:52.786097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-07-26 18:33:52.786112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-07-26 18:33:52.786126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:26.871 [2024-07-26 18:33:52.786156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.871 qpair failed and we were unable to recover it. 
00:33:26.871 [2024-07-26 18:33:52.795942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.796094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.796125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.796141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.796154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.796185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.805980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.806121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.806146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.806161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.806174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.806206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.815998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.816144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.816170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.816185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.816199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.816229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.826088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.826226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.826251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.826266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.826280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.826311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.836181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.836320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.836346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.836361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.836380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.836424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.846098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.846230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.846256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.846271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.846285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.846315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.856138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.856275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.856300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.856316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.856329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.856360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.871 qpair failed and we were unable to recover it.
00:33:26.871 [2024-07-26 18:33:52.866157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.871 [2024-07-26 18:33:52.866293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.871 [2024-07-26 18:33:52.866318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.871 [2024-07-26 18:33:52.866333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.871 [2024-07-26 18:33:52.866347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.871 [2024-07-26 18:33:52.866378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.876176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.876310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.876336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.876351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.876365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.876394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.886188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.886322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.886348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.886363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.886376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.886406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.896335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.896472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.896499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.896513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.896527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.896569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.906313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.906450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.906475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.906490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.906503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.906533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.916276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.916415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.916441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.916456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.916469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.916499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.926306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.926436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.926461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.926476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.926495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.926527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.936339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.936493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.936528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.936543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.936556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.936588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.946347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.946484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.946509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.946524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.946537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.946567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.956415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.956571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.956600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.956615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.956628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.956659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.966429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.966606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.966632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.966647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.966661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.966691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.976530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.976679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.976704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.976719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.976733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.976764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.872 qpair failed and we were unable to recover it.
00:33:26.872 [2024-07-26 18:33:52.986465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.872 [2024-07-26 18:33:52.986598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.872 [2024-07-26 18:33:52.986623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.872 [2024-07-26 18:33:52.986638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.872 [2024-07-26 18:33:52.986653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.872 [2024-07-26 18:33:52.986682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.873 qpair failed and we were unable to recover it.
00:33:26.873 [2024-07-26 18:33:52.996542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.873 [2024-07-26 18:33:52.996695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.873 [2024-07-26 18:33:52.996721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.873 [2024-07-26 18:33:52.996736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.873 [2024-07-26 18:33:52.996750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.873 [2024-07-26 18:33:52.996780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.873 qpair failed and we were unable to recover it.
00:33:26.873 [2024-07-26 18:33:53.006602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:26.873 [2024-07-26 18:33:53.006774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:26.873 [2024-07-26 18:33:53.006800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:26.873 [2024-07-26 18:33:53.006815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:26.873 [2024-07-26 18:33:53.006828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:26.873 [2024-07-26 18:33:53.006860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:26.873 qpair failed and we were unable to recover it.
00:33:27.134 [2024-07-26 18:33:53.016587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.134 [2024-07-26 18:33:53.016723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.134 [2024-07-26 18:33:53.016749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.134 [2024-07-26 18:33:53.016770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.134 [2024-07-26 18:33:53.016785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.134 [2024-07-26 18:33:53.016815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.134 qpair failed and we were unable to recover it.
00:33:27.134 [2024-07-26 18:33:53.026616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.134 [2024-07-26 18:33:53.026760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.134 [2024-07-26 18:33:53.026787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.134 [2024-07-26 18:33:53.026802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.134 [2024-07-26 18:33:53.026816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.134 [2024-07-26 18:33:53.026847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.134 qpair failed and we were unable to recover it.
00:33:27.134 [2024-07-26 18:33:53.036652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.134 [2024-07-26 18:33:53.036790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.134 [2024-07-26 18:33:53.036826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.134 [2024-07-26 18:33:53.036842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.134 [2024-07-26 18:33:53.036856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.134 [2024-07-26 18:33:53.036886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.134 qpair failed and we were unable to recover it.
00:33:27.134 [2024-07-26 18:33:53.046671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.046816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.046842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.046857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.046871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.046903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.056774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.056912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.056938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.056953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.056967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.057009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.066711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.066850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.066876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.066891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.066904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.066934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.076726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.076857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.076883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.076899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.076912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.076944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.086808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.086974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.087011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.087025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.087039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.087078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.096805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.096942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.096967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.096982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.096995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.097025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.106822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.106995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.107029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.107045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.107065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.107097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.116838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.116967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.116993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.117008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.117022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.117071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.127016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.127206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.127232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.127248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.127263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.127307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.136945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.137133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.137159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.137174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.137187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.137218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.146943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.147093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.147122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.147137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.147150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.147187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.156963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.135 [2024-07-26 18:33:53.157113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.135 [2024-07-26 18:33:53.157140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.135 [2024-07-26 18:33:53.157154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.135 [2024-07-26 18:33:53.157168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.135 [2024-07-26 18:33:53.157198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.135 qpair failed and we were unable to recover it.
00:33:27.135 [2024-07-26 18:33:53.167000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.167146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.167172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.167187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.167200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.167231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.177028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.177171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.177197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.177212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.177226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.177257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.187074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.187213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.187239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.187254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.187267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.187297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.197090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.197225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.197256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.197271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.197284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.197314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.207096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.207233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.207259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.207274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.207288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.207319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.217159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.217297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.217326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.217344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.217358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.217390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.227189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.227332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.227358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.227378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.227393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.227426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.237212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.237343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.237370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.237386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.237399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.237435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.247244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.247377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.247402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.247416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.247429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.247458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.257252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.257395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.257421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.257436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.257449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.257480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.136 [2024-07-26 18:33:53.267323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.136 [2024-07-26 18:33:53.267470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.136 [2024-07-26 18:33:53.267497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.136 [2024-07-26 18:33:53.267512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.136 [2024-07-26 18:33:53.267525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.136 [2024-07-26 18:33:53.267567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.136 qpair failed and we were unable to recover it.
00:33:27.397 [2024-07-26 18:33:53.277298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.397 [2024-07-26 18:33:53.277445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.397 [2024-07-26 18:33:53.277472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.397 [2024-07-26 18:33:53.277487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.397 [2024-07-26 18:33:53.277501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.397 [2024-07-26 18:33:53.277533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.397 qpair failed and we were unable to recover it.
00:33:27.397 [2024-07-26 18:33:53.287433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.287574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.287600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.287615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.287628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.287659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.297349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.297487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.297512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.297527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.297540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.297570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.307437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.307579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.307605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.307620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.307633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.307663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.317452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.317580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.317606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.317621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.317634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.317664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.327516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.327649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.327675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.327690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.327709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.327742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.337514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.337684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.337710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.337725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.337739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.337770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.347542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.347683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.347710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.347725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.347742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.347773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.357529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.357664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.357690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.357705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.357719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.357750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.367608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.367789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.367815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.367829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.367843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.367884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.377603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.377740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.377766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.377781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.377795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.377825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.387633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.387767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.387793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.387808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.387820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.398 [2024-07-26 18:33:53.387851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.398 qpair failed and we were unable to recover it.
00:33:27.398 [2024-07-26 18:33:53.397675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.398 [2024-07-26 18:33:53.397814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.398 [2024-07-26 18:33:53.397840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.398 [2024-07-26 18:33:53.397855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.398 [2024-07-26 18:33:53.397868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.397898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.407686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.407821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.407848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.407867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.407881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.407915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.417743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.417884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.417911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.417932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.417947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.417978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.427747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.427882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.427908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.427922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.427936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.427966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.437807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.437935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.437962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.437977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.437990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.438021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.447792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.447922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.447948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.447962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.447976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.448006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.457824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.457959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.457985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.457999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.458013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.458042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.467876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.468014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.468039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.468053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.468075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.468107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.477902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.399 [2024-07-26 18:33:53.478032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.399 [2024-07-26 18:33:53.478066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.399 [2024-07-26 18:33:53.478084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.399 [2024-07-26 18:33:53.478098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.399 [2024-07-26 18:33:53.478128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.399 qpair failed and we were unable to recover it.
00:33:27.399 [2024-07-26 18:33:53.487908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.399 [2024-07-26 18:33:53.488047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.399 [2024-07-26 18:33:53.488080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.399 [2024-07-26 18:33:53.488095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.399 [2024-07-26 18:33:53.488109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.399 [2024-07-26 18:33:53.488140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.399 qpair failed and we were unable to recover it. 00:33:27.399 [2024-07-26 18:33:53.497970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.399 [2024-07-26 18:33:53.498117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.399 [2024-07-26 18:33:53.498143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.399 [2024-07-26 18:33:53.498158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.399 [2024-07-26 18:33:53.498172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.399 [2024-07-26 18:33:53.498203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.399 qpair failed and we were unable to recover it. 00:33:27.400 [2024-07-26 18:33:53.507983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-07-26 18:33:53.508124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-07-26 18:33:53.508156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-07-26 18:33:53.508171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-07-26 18:33:53.508185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.400 [2024-07-26 18:33:53.508215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.400 qpair failed and we were unable to recover it. 
00:33:27.400 [2024-07-26 18:33:53.518009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-07-26 18:33:53.518148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-07-26 18:33:53.518174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-07-26 18:33:53.518188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-07-26 18:33:53.518202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.400 [2024-07-26 18:33:53.518233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-07-26 18:33:53.528037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-07-26 18:33:53.528194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-07-26 18:33:53.528220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-07-26 18:33:53.528235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-07-26 18:33:53.528248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.400 [2024-07-26 18:33:53.528280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-07-26 18:33:53.538092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-07-26 18:33:53.538247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-07-26 18:33:53.538273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-07-26 18:33:53.538287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-07-26 18:33:53.538301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.400 [2024-07-26 18:33:53.538331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.400 qpair failed and we were unable to recover it. 
00:33:27.661 [2024-07-26 18:33:53.548113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.661 [2024-07-26 18:33:53.548253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.661 [2024-07-26 18:33:53.548278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.661 [2024-07-26 18:33:53.548293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.661 [2024-07-26 18:33:53.548309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.661 [2024-07-26 18:33:53.548346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.661 qpair failed and we were unable to recover it.
00:33:27.661 [2024-07-26 18:33:53.558116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.661 [2024-07-26 18:33:53.558248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.661 [2024-07-26 18:33:53.558274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.661 [2024-07-26 18:33:53.558289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.661 [2024-07-26 18:33:53.558303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.661 [2024-07-26 18:33:53.558333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.661 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.568166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.568301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.568326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.568341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.568354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.568384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.578178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.578315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.578341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.578356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.578369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.578399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.588176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.588310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.588336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.588350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.588364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.588394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.598212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.598350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.598381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.598396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.598410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.598440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.608264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.608396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.608422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.608436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.608449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.608479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.618287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.618440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.618466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.618481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.618494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.618525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.628322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.628456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.628481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.628496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.628509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.628541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.638343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.638477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.638503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.638518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.638531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.638567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.648344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.648477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.648503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.648521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.648535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.648566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.658393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.658532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.658558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.658573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.658587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.658617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.668512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.668649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.668675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.668690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.668704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.668748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.662 qpair failed and we were unable to recover it.
00:33:27.662 [2024-07-26 18:33:53.678438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.662 [2024-07-26 18:33:53.678574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.662 [2024-07-26 18:33:53.678599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.662 [2024-07-26 18:33:53.678613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.662 [2024-07-26 18:33:53.678627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.662 [2024-07-26 18:33:53.678657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.688504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.688656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.688687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.688703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.688716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.688746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.698551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.698691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.698716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.698731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.698744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.698777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.708539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.708674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.708699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.708714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.708727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.708757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.718624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.718769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.718794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.718809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.718823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.718856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.728616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.728753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.728778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.728793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.728812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.728845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.738660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.738807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.738832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.738847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.738861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.738891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.748662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.748799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.748824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.748839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.748852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.748883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.758665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.758799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.758824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.758839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.758852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.758882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.768720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.768850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.768876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.768891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.768905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.768935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.778748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.778895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.778922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.778937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.778950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.778980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.788797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.788965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.788991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.789006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.789019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.789050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.663 qpair failed and we were unable to recover it.
00:33:27.663 [2024-07-26 18:33:53.798802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.663 [2024-07-26 18:33:53.798942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.663 [2024-07-26 18:33:53.798968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.663 [2024-07-26 18:33:53.798983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.663 [2024-07-26 18:33:53.798996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.663 [2024-07-26 18:33:53.799027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.664 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.808829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.808964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.808991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.809005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.809019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.809049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.926 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.818884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.819021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.819047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.819080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.819096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.819128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.926 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.828898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.829034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.829068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.829086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.829100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.829132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.926 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.838914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.839051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.839084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.839099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.839114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.839145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.926 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.848943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.849084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.849110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.849125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.849139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.849170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.926 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.859004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.859146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.859172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.859187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.859200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.859233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.926 qpair failed and we were unable to recover it.
00:33:27.926 [2024-07-26 18:33:53.869010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.926 [2024-07-26 18:33:53.869155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.926 [2024-07-26 18:33:53.869181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.926 [2024-07-26 18:33:53.869197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.926 [2024-07-26 18:33:53.869210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.926 [2024-07-26 18:33:53.869240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.879019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.879155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.879182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.879196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.879209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.879240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.889049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.889190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.889216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.889230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.889244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.889273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.899103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.899241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.899266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.899281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.899294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.899325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.909193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.909321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.909348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.909369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.909383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.909427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.919137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.919266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.919293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.919308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.919321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.919351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.929162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.929296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.929322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.929336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.929350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.929380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.939200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.939335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.939361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.939375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.939389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.939419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.949229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.949368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.949396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.949414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.949427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.949459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.959293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.959429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.959455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.959470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.959483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.959515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.969288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.969427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.969453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.969467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.969481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.969511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.979312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.979451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.979477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.979492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.979505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.979535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.927 [2024-07-26 18:33:53.989376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.927 [2024-07-26 18:33:53.989512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.927 [2024-07-26 18:33:53.989537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.927 [2024-07-26 18:33:53.989552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.927 [2024-07-26 18:33:53.989565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.927 [2024-07-26 18:33:53.989607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.927 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:53.999382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.928 [2024-07-26 18:33:53.999538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.928 [2024-07-26 18:33:53.999568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.928 [2024-07-26 18:33:53.999583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.928 [2024-07-26 18:33:53.999596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.928 [2024-07-26 18:33:53.999627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.928 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:54.009451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.928 [2024-07-26 18:33:54.009614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.928 [2024-07-26 18:33:54.009640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.928 [2024-07-26 18:33:54.009654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.928 [2024-07-26 18:33:54.009668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.928 [2024-07-26 18:33:54.009699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.928 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:54.019416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.928 [2024-07-26 18:33:54.019561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.928 [2024-07-26 18:33:54.019587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.928 [2024-07-26 18:33:54.019602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.928 [2024-07-26 18:33:54.019616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.928 [2024-07-26 18:33:54.019648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.928 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:54.029426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.928 [2024-07-26 18:33:54.029571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.928 [2024-07-26 18:33:54.029597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.928 [2024-07-26 18:33:54.029612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.928 [2024-07-26 18:33:54.029625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.928 [2024-07-26 18:33:54.029658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.928 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:54.039451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.928 [2024-07-26 18:33:54.039582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.928 [2024-07-26 18:33:54.039608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.928 [2024-07-26 18:33:54.039623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.928 [2024-07-26 18:33:54.039636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.928 [2024-07-26 18:33:54.039674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.928 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:54.049542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.928 [2024-07-26 18:33:54.049701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.928 [2024-07-26 18:33:54.049728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.928 [2024-07-26 18:33:54.049743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.928 [2024-07-26 18:33:54.049760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:27.928 [2024-07-26 18:33:54.049792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.928 qpair failed and we were unable to recover it.
00:33:27.928 [2024-07-26 18:33:54.059562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.928 [2024-07-26 18:33:54.059707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.928 [2024-07-26 18:33:54.059734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.928 [2024-07-26 18:33:54.059749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.928 [2024-07-26 18:33:54.059763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:27.928 [2024-07-26 18:33:54.059806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:27.928 qpair failed and we were unable to recover it. 00:33:28.189 [2024-07-26 18:33:54.069566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.189 [2024-07-26 18:33:54.069700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.189 [2024-07-26 18:33:54.069727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.189 [2024-07-26 18:33:54.069742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.189 [2024-07-26 18:33:54.069755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.189 [2024-07-26 18:33:54.069788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.189 qpair failed and we were unable to recover it. 00:33:28.189 [2024-07-26 18:33:54.079663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.189 [2024-07-26 18:33:54.079790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.189 [2024-07-26 18:33:54.079817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.189 [2024-07-26 18:33:54.079832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.189 [2024-07-26 18:33:54.079845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.189 [2024-07-26 18:33:54.079887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.189 qpair failed and we were unable to recover it. 
00:33:28.189 [2024-07-26 18:33:54.089643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.189 [2024-07-26 18:33:54.089782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.189 [2024-07-26 18:33:54.089814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.189 [2024-07-26 18:33:54.089830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.189 [2024-07-26 18:33:54.089843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.189 [2024-07-26 18:33:54.089874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.189 qpair failed and we were unable to recover it. 00:33:28.189 [2024-07-26 18:33:54.099654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.189 [2024-07-26 18:33:54.099790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.189 [2024-07-26 18:33:54.099816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.189 [2024-07-26 18:33:54.099831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.189 [2024-07-26 18:33:54.099844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.099875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.109693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.109850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.109877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.109892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.109909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.109941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 
00:33:28.190 [2024-07-26 18:33:54.119687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.119850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.119876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.119891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.119905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.119935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.129715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.129874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.129900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.129915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.129934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.129966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.139757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.139897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.139923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.139938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.139951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.139982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 
00:33:28.190 [2024-07-26 18:33:54.149752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.149893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.149920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.149935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.149948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.149979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.159799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.159935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.159962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.159977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.159991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.160021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.169813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.169943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.169969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.169983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.169997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.170029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 
00:33:28.190 [2024-07-26 18:33:54.179874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.180056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.180091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.180118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.180132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.180162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.189884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.190072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.190099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.190113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.190127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.190157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.199919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.200054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.200090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.200106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.200118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.200149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 
00:33:28.190 [2024-07-26 18:33:54.209924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.210057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.210090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.210105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.190 [2024-07-26 18:33:54.210118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.190 [2024-07-26 18:33:54.210151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.190 qpair failed and we were unable to recover it. 00:33:28.190 [2024-07-26 18:33:54.219962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.190 [2024-07-26 18:33:54.220107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.190 [2024-07-26 18:33:54.220134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.190 [2024-07-26 18:33:54.220155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.220169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.220200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.229998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.230145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.230171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.230187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.230200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.230231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 
00:33:28.191 [2024-07-26 18:33:54.240055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.240218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.240244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.240259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.240272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.240304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.250052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.250206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.250231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.250245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.250257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.250288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.260124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.260295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.260322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.260336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.260350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.260381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 
00:33:28.191 [2024-07-26 18:33:54.270116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.270247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.270272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.270287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.270300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.270331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.280123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.280255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.280280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.280295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.280309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.280339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.290145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.290281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.290308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.290322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.290336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.290366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 
00:33:28.191 [2024-07-26 18:33:54.300244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.300429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.300455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.300470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.300483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.300514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.310269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.310408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.310434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.310455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.310469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.310500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 00:33:28.191 [2024-07-26 18:33:54.320257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.191 [2024-07-26 18:33:54.320409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.191 [2024-07-26 18:33:54.320436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.191 [2024-07-26 18:33:54.320451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.191 [2024-07-26 18:33:54.320464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.191 [2024-07-26 18:33:54.320495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.191 qpair failed and we were unable to recover it. 
00:33:28.191 [2024-07-26 18:33:54.330303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.192 [2024-07-26 18:33:54.330486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.192 [2024-07-26 18:33:54.330512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.192 [2024-07-26 18:33:54.330527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.192 [2024-07-26 18:33:54.330541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.192 [2024-07-26 18:33:54.330571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.192 qpair failed and we were unable to recover it. 00:33:28.451 [2024-07-26 18:33:54.340300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.451 [2024-07-26 18:33:54.340438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.451 [2024-07-26 18:33:54.340464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.451 [2024-07-26 18:33:54.340479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.451 [2024-07-26 18:33:54.340492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.451 [2024-07-26 18:33:54.340524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-07-26 18:33:54.350341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.451 [2024-07-26 18:33:54.350480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.350505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.350520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.350533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.350564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-07-26 18:33:54.360413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.360572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.360597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.360612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.360625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.360655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.370408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.370558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.370583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.370598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.370611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.370642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.380426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.380571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.380596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.380611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.380624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.380654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-07-26 18:33:54.390534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.390668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.390694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.390709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.390722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.390764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.400488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.400625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.400655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.400671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.400684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.400715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.410506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.410638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.410663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.410678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.410691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.410721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-07-26 18:33:54.420565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.420702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.420727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.420742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.420756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.420787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.430587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.430728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.430754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.430769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.430782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.430813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.440589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.440722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.440747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.440762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.440775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.440812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-07-26 18:33:54.450712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.450857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.450882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.450897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.450910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.450952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.460692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.452 [2024-07-26 18:33:54.460832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.452 [2024-07-26 18:33:54.460857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.452 [2024-07-26 18:33:54.460872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.452 [2024-07-26 18:33:54.460886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.452 [2024-07-26 18:33:54.460917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-07-26 18:33:54.470671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.470805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.470831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.470845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.470859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.470891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 
00:33:28.453 [2024-07-26 18:33:54.480690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.480823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.480849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.480864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.480876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.480907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.490751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.490885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.490916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.490932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.490946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.490976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.500776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.500920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.500946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.500960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.500974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.501005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 
00:33:28.453 [2024-07-26 18:33:54.510815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.510950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.510975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.510990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.511003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.511034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.520825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.520961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.520987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.521002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.521015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.521047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.530847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.530977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.531003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.531018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.531037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.531076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 
00:33:28.453 [2024-07-26 18:33:54.540919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.541066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.541091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.541106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.541120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.541153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.550897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.551036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.551068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.551084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.551098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.551130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.560964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.561136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.561162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.561178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.561192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.561224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 
00:33:28.453 [2024-07-26 18:33:54.570981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.571124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.571150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.571166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.571179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.571209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.581033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.581207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.453 [2024-07-26 18:33:54.581234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.453 [2024-07-26 18:33:54.581249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.453 [2024-07-26 18:33:54.581262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.453 [2024-07-26 18:33:54.581292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.453 qpair failed and we were unable to recover it. 00:33:28.453 [2024-07-26 18:33:54.591034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.453 [2024-07-26 18:33:54.591195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.454 [2024-07-26 18:33:54.591221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.454 [2024-07-26 18:33:54.591236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.454 [2024-07-26 18:33:54.591250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.454 [2024-07-26 18:33:54.591282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.454 qpair failed and we were unable to recover it. 
00:33:28.713 [2024-07-26 18:33:54.601050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.713 [2024-07-26 18:33:54.601208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.713 [2024-07-26 18:33:54.601233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.713 [2024-07-26 18:33:54.601248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.713 [2024-07-26 18:33:54.601261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.601292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.611093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.611217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.611244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.611258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.611272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.611304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.621122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.621298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.621324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.621339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.621361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.621394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 
00:33:28.714 [2024-07-26 18:33:54.631166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.631308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.631335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.631354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.631369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.631400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.641246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.641378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.641404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.641419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.641432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.641476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.651197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.651352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.651378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.651392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.651406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.651436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 
00:33:28.714 [2024-07-26 18:33:54.661243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.661377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.661402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.661417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.661431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.661462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.671285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.671421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.671447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.671467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.671481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.671514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.681329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.681507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.681532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.681547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.681561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.681591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 
00:33:28.714 [2024-07-26 18:33:54.691336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.691471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.691496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.691511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.691525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.691556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.701374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.701559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.701584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.701599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.701612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.701645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 00:33:28.714 [2024-07-26 18:33:54.711386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.711516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.711542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.711563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.711576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.714 [2024-07-26 18:33:54.711607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.714 qpair failed and we were unable to recover it. 
00:33:28.714 [2024-07-26 18:33:54.721400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.714 [2024-07-26 18:33:54.721549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.714 [2024-07-26 18:33:54.721574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.714 [2024-07-26 18:33:54.721589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.714 [2024-07-26 18:33:54.721602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.721632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.731418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.731550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.731576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.731590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.731603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.731633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.741449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.741631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.741656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.741671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.741684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.741714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 
00:33:28.715 [2024-07-26 18:33:54.751465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.751601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.751627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.751643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.751657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.751687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.761528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.761670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.761696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.761711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.761724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.761753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.771607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.771741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.771767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.771781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.771795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.771826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 
00:33:28.715 [2024-07-26 18:33:54.781557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.781701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.781727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.781742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.781756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.781787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.791645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.791780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.791809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.791824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.791837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.791868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.801615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.801749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.801780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.801799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.801813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.801845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 
00:33:28.715 [2024-07-26 18:33:54.811680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.811854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.811880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.811895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.811909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.811940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.821684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.821825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.821850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.821866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.821879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.821912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.715 [2024-07-26 18:33:54.831761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.831942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.831968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.831983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.831997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.832027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 
00:33:28.715 [2024-07-26 18:33:54.841851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.715 [2024-07-26 18:33:54.841998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.715 [2024-07-26 18:33:54.842024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.715 [2024-07-26 18:33:54.842040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.715 [2024-07-26 18:33:54.842053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.715 [2024-07-26 18:33:54.842115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.715 qpair failed and we were unable to recover it. 00:33:28.716 [2024-07-26 18:33:54.851799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.716 [2024-07-26 18:33:54.851933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.716 [2024-07-26 18:33:54.851962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.716 [2024-07-26 18:33:54.851979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.716 [2024-07-26 18:33:54.851992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.716 [2024-07-26 18:33:54.852024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.716 qpair failed and we were unable to recover it. 00:33:28.976 [2024-07-26 18:33:54.861821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.861962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.861989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.862004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.862018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.862048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 
00:33:28.976 [2024-07-26 18:33:54.871826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.871965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.871992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.872010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.872024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.872064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 00:33:28.976 [2024-07-26 18:33:54.881862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.882067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.882094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.882110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.882124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.882158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 00:33:28.976 [2024-07-26 18:33:54.891970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.892110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.892142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.892158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.892172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.892215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 
00:33:28.976 [2024-07-26 18:33:54.901917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.902054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.902087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.902102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.902116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.902147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 00:33:28.976 [2024-07-26 18:33:54.911968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.912119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.912145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.912160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.912174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.912204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 00:33:28.976 [2024-07-26 18:33:54.921987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.922128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.922154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.922168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.922182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.922212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.976 qpair failed and we were unable to recover it. 
00:33:28.976 [2024-07-26 18:33:54.932033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.976 [2024-07-26 18:33:54.932203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.976 [2024-07-26 18:33:54.932230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.976 [2024-07-26 18:33:54.932245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.976 [2024-07-26 18:33:54.932258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.976 [2024-07-26 18:33:54.932306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:54.942067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:54.942211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:54.942237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:54.942252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:54.942266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:54.942296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:54.952090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:54.952229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:54.952254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:54.952269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:54.952282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:54.952313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 
00:33:28.977 [2024-07-26 18:33:54.962126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:54.962305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:54.962330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:54.962345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:54.962358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:54.962390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:54.972144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:54.972294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:54.972319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:54.972334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:54.972348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:54.972378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:54.982157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:54.982298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:54.982323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:54.982337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:54.982351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:54.982381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 
00:33:28.977 [2024-07-26 18:33:54.992195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:54.992330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:54.992356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:54.992370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:54.992384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:54.992414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:55.002210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:55.002348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:55.002374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:55.002389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:55.002403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:55.002433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:55.012245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:55.012382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:55.012408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:55.012423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:55.012437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:55.012467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 
00:33:28.977 [2024-07-26 18:33:55.022313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:55.022453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:55.022479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:55.022494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:55.022512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:55.022544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:55.032377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:55.032508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:55.032534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:55.032549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:55.032562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:55.032604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 00:33:28.977 [2024-07-26 18:33:55.042348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.977 [2024-07-26 18:33:55.042525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.977 [2024-07-26 18:33:55.042551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.977 [2024-07-26 18:33:55.042566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.977 [2024-07-26 18:33:55.042580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.977 [2024-07-26 18:33:55.042611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.977 qpair failed and we were unable to recover it. 
00:33:28.977 [2024-07-26 18:33:55.052352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.978 [2024-07-26 18:33:55.052485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.978 [2024-07-26 18:33:55.052512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.978 [2024-07-26 18:33:55.052526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.978 [2024-07-26 18:33:55.052540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.978 [2024-07-26 18:33:55.052571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.978 qpair failed and we were unable to recover it. 00:33:28.978 [2024-07-26 18:33:55.062451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.978 [2024-07-26 18:33:55.062589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.978 [2024-07-26 18:33:55.062615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.978 [2024-07-26 18:33:55.062630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.978 [2024-07-26 18:33:55.062643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.978 [2024-07-26 18:33:55.062675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.978 qpair failed and we were unable to recover it. 00:33:28.978 [2024-07-26 18:33:55.072399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.978 [2024-07-26 18:33:55.072534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.978 [2024-07-26 18:33:55.072560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.978 [2024-07-26 18:33:55.072575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.978 [2024-07-26 18:33:55.072588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:28.978 [2024-07-26 18:33:55.072619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.978 qpair failed and we were unable to recover it. 
00:33:28.978 [2024-07-26 18:33:55.082435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.978 [2024-07-26 18:33:55.082567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.978 [2024-07-26 18:33:55.082593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.978 [2024-07-26 18:33:55.082608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.978 [2024-07-26 18:33:55.082621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90
00:33:28.978 [2024-07-26 18:33:55.082663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.978 qpair failed and we were unable to recover it.
[the same seven-line CONNECT failure repeats roughly every 10 ms, timestamps 18:33:55.092 through 18:33:55.343; 26 further near-identical attempts elided]
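For readers decoding the failure above: "sct 1, sc 130" is status code type 1 (command specific) with status value 0x82, which in a Fabrics CONNECT response indicates invalid connect parameters; that lines up with the target-side "Unknown controller ID 0x1", since the host keeps trying to add I/O qpairs to a controller that no longer exists on the target. A single CONNECT attempt can be reproduced from a plain Linux host. This is an illustrative sketch, not part of the test scripts; it assumes nvme-cli and the kernel nvme-tcp initiator, with the portal and NQN taken from the log:

  # one-shot NVMe/TCP fabrics CONNECT against the portal used by the test;
  # while the target side of cnode1 is mid-teardown this fails much like
  # the SPDK host above (connect error with a command-specific status)
  sudo modprobe nvme-tcp
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1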
00:33:29.241 [2024-07-26 18:33:55.353246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.241 [2024-07-26 18:33:55.353413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.241 [2024-07-26 18:33:55.353439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.241 [2024-07-26 18:33:55.353453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.241 [2024-07-26 18:33:55.353467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcf98000b90 00:33:29.241 [2024-07-26 18:33:55.353498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:29.241 qpair failed and we were unable to recover it. 00:33:29.241 [2024-07-26 18:33:55.353626] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:29.241 A controller has encountered a failure and is being reset. 00:33:29.241 [2024-07-26 18:33:55.353700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968470 (9): Bad file descriptor 00:33:29.500 Controller properly reset. 00:33:29.500 Initializing NVMe Controllers 00:33:29.500 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:29.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:29.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:29.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:29.500 Initialization complete. Launching workers. 
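Once "Controller properly reset." is printed, the host re-attaches to nqn.2016-06.io.spdk:cnode1 and associates the connection with lcores 0 through 3 before launching its worker threads (the "Starting thread on core N" lines that follow). A quick host-side check that the portal is serving fabrics commands again is to read the discovery log page; a hedged sketch, assuming nvme-cli and that the target exposes its discovery service on the same portal:

  # query the discovery log page on the listener the test uses;
  # a successful reply implies the target survived the reset
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420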
00:33:29.500 Starting thread on core 1 00:33:29.500 Starting thread on core 2 00:33:29.500 Starting thread on core 3 00:33:29.500 Starting thread on core 0 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:29.500 00:33:29.500 real 0m10.790s 00:33:29.500 user 0m18.555s 00:33:29.500 sys 0m5.384s 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:29.500 ************************************ 00:33:29.500 END TEST nvmf_target_disconnect_tc2 00:33:29.500 ************************************ 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:29.500 rmmod nvme_tcp 00:33:29.500 rmmod nvme_fabrics 00:33:29.500 rmmod nvme_keyring 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1620631 ']' 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1620631 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1620631 ']' 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1620631 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:29.500 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1620631 00:33:29.760 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:33:29.760 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:33:29.760 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1620631' 00:33:29.760 killing process with pid 1620631 00:33:29.760 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1620631 00:33:29.760 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1620631 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.021 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.931 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:31.931 00:33:31.931 real 0m15.508s 00:33:31.931 user 0m44.774s 00:33:31.931 sys 0m7.332s 00:33:31.932 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.932 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 ************************************ 00:33:31.932 END TEST nvmf_target_disconnect 00:33:31.932 ************************************ 00:33:31.932 18:33:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:31.932 00:33:31.932 real 6m31.675s 00:33:31.932 user 16m46.106s 00:33:31.932 sys 1m23.882s 00:33:31.932 18:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.932 18:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 ************************************ 00:33:31.932 END TEST nvmf_host 00:33:31.932 ************************************ 00:33:31.932 00:33:31.932 real 27m8.591s 00:33:31.932 user 73m52.194s 00:33:31.932 sys 6m27.395s 00:33:31.932 18:33:58 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.932 18:33:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 ************************************ 00:33:31.932 END TEST nvmf_tcp 00:33:31.932 ************************************ 00:33:31.932 18:33:58 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:33:31.932 18:33:58 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:31.932 18:33:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:31.932 18:33:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:31.932 18:33:58 -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 ************************************ 00:33:31.932 START TEST spdkcli_nvmf_tcp 00:33:31.932 ************************************ 00:33:31.932 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:32.191 * Looking for test storage... 
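For reference at this suite boundary: the teardown traced before the END TEST banners is nvmftestfini, which unloads the kernel initiator stack (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), stops the userspace target via killprocess (kill, then wait; pid 1620631 in this run), and finally flushes the address off the test interface. A condensed sketch of the same order; the variable name is illustrative, everything else is taken from the log:

  # mirror nvmftestfini: initiator modules out first, then the target process
  sudo modprobe -v -r nvme-tcp      # also drops nvme_fabrics/nvme_keyring when unused
  kill "$nvmf_tgt_pid"              # killprocess: signal the reactor ...
  wait "$nvmf_tgt_pid" || true      # ... and reap it so the ports are freed
  sudo ip -4 addr flush cvl_0_1     # interface name as printed above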
00:33:32.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.191 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1621825 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1621825 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1621825 ']' 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:32.192 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.192 [2024-07-26 18:33:58.174864] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:32.192 [2024-07-26 18:33:58.174941] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621825 ] 00:33:32.192 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.192 [2024-07-26 18:33:58.206024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:32.192 [2024-07-26 18:33:58.232963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:32.192 [2024-07-26 18:33:58.322965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.192 [2024-07-26 18:33:58.322969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.451 18:33:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:32.451 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:32.451 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:32.451 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:32.451 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:32.451 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:32.451 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:32.451 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.451 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.451 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:32.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:32.451 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:32.451 ' 00:33:34.981 [2024-07-26 18:34:00.994231] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.356 [2024-07-26 18:34:02.218585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:38.898 [2024-07-26 18:34:04.497684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:40.806 [2024-07-26 18:34:06.456149] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:42.188 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:42.188 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:42.188 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:42.188 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:42.188 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:42.188 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:42.188 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:42.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:42.188 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:42.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:42.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:42.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:42.188 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:42.188 18:34:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.447 18:34:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:42.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:42.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:42.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:42.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:42.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:42.447 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:42.447 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:42.447 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:42.447 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:42.447 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:42.447 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:42.447 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:42.447 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:42.447 ' 00:33:47.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:47.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:47.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:47.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:47.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:47.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:47.726 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:47.726 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:47.726 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:47.726 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:47.726 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:47.726 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:47.726 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:47.726 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1621825 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1621825 ']' 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1621825 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:47.726 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.727 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621825 00:33:47.727 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:47.727 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:47.727 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621825' 00:33:47.727 killing process with pid 1621825 00:33:47.727 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1621825 00:33:47.727 18:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1621825 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1621825 ']' 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1621825 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1621825 ']' 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1621825 00:33:47.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1621825) - No such process 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1621825 is not found' 00:33:47.985 Process with pid 1621825 is not found 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:47.985 00:33:47.985 real 0m16.017s 00:33:47.985 user 0m33.909s 00:33:47.985 sys 0m0.787s 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:47.985 18:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:47.985 ************************************ 00:33:47.985 END TEST spdkcli_nvmf_tcp 00:33:47.985 ************************************ 00:33:47.985 18:34:14 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:47.985 18:34:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:47.985 18:34:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:47.985 18:34:14 -- common/autotest_common.sh@10 -- # set +x 00:33:47.985 ************************************ 00:33:47.985 START TEST nvmf_identify_passthru 00:33:47.985 ************************************ 00:33:47.985 18:34:14 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:48.243 * Looking for test storage... 00:33:48.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:48.243 18:34:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.243 18:34:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.243 18:34:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.243 18:34:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.243 18:34:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.243 18:34:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.243 18:34:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.243 18:34:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:48.243 18:34:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.243 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:48.244 18:34:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.244 18:34:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.244 18:34:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.244 18:34:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.244 18:34:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.244 18:34:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.244 18:34:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.244 18:34:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:48.244 18:34:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.244 18:34:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.244 18:34:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:48.244 18:34:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:48.244 18:34:14 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:48.244 18:34:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.148 18:34:16 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:50.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:50.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:50.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:50.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
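For reference, the device probe that just ran (gather_supported_nvmf_pci_devs) boils down to matching known NIC IDs against sysfs and resolving their net devices. A rough standalone sketch, not the harness's real helper; the vendor/device IDs (0x8086/0x159b, the two E810 ports) and the "Found net devices under ..." output shape are taken from the trace above:

# Rough sysfs sketch of the probe traced above (hypothetical standalone
# version; the harness's helper also handles Mellanox IDs and RDMA).
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
  # the trace matched two E810 ports: vendor 0x8086, device 0x159b
  [[ $vendor == "$intel" && $device == 0x159b ]] || continue
  # resolve the kernel net devices bound to this function, e.g. cvl_0_0
  for net in "$pci"/net/*; do
    [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done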
00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.148 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:50.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:33:50.149 00:33:50.149 --- 10.0.0.2 ping statistics --- 00:33:50.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.149 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:33:50.149 00:33:50.149 --- 10.0.0.1 ping statistics --- 00:33:50.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.149 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:50.149 18:34:16 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:50.149 18:34:16 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:50.149 18:34:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:50.409 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.605 
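Stepping back to the nvmf_tcp_init block traced a little earlier: it amounts to a small two-namespace topology in which one E810 port (cvl_0_0) is moved into a target namespace as 10.0.0.2 while its sibling (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, with both directions verified by ping. A minimal sketch of the same wiring, names and addresses taken from this run:

# Minimal sketch of nvmf_tcp_init as traced above (same names/addresses).
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator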
18:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:54.605 18:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:54.605 18:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:54.605 18:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:54.605 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1626322 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:58.800 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1626322 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1626322 ']' 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:58.800 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.800 [2024-07-26 18:34:24.728732] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:58.800 [2024-07-26 18:34:24.728839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.800 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.800 [2024-07-26 18:34:24.771095] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:58.800 [2024-07-26 18:34:24.802015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:58.800 [2024-07-26 18:34:24.899919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
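Both identify probes above (serial number, then model number) reduce to the same pipeline over the bare PCIe device. A condensed sketch using the exact commands from the trace; head -n1 is a simplification of the harness's get_first_nvme_bdf, labeled as such:

# The two probes traced above, condensed (bdf 0000:88:00.0 in this run).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdf=$("$spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)  # simplification
serial=$("$spdk/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
           | grep 'Serial Number:' | awk '{print $3}')
model=$("$spdk/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
          | grep 'Model Number:' | awk '{print $3}')
echo "serial=$serial model=$model"   # PHLJ916004901P0FGN / INTEL in this run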
00:33:58.800 [2024-07-26 18:34:24.899985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.800 [2024-07-26 18:34:24.900002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.800 [2024-07-26 18:34:24.900015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.800 [2024-07-26 18:34:24.900027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:58.800 [2024-07-26 18:34:24.900094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.800 [2024-07-26 18:34:24.900485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.800 [2024-07-26 18:34:24.900591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.800 [2024-07-26 18:34:24.900594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.058 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:59.058 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:33:59.058 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:59.058 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.058 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.058 INFO: Log level set to 20 00:33:59.058 INFO: Requests: 00:33:59.058 { 00:33:59.058 "jsonrpc": "2.0", 00:33:59.058 "method": "nvmf_set_config", 00:33:59.058 "id": 1, 00:33:59.058 "params": { 00:33:59.058 "admin_cmd_passthru": { 00:33:59.058 "identify_ctrlr": true 00:33:59.058 } 00:33:59.058 } 00:33:59.058 } 00:33:59.058 00:33:59.058 INFO: response: 00:33:59.058 { 00:33:59.058 "jsonrpc": "2.0", 00:33:59.058 "id": 1, 00:33:59.058 "result": true 00:33:59.058 } 00:33:59.058 00:33:59.058 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.059 18:34:24 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:59.059 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.059 18:34:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.059 INFO: Setting log level to 20 00:33:59.059 INFO: Setting log level to 20 00:33:59.059 INFO: Log level set to 20 00:33:59.059 INFO: Log level set to 20 00:33:59.059 INFO: Requests: 00:33:59.059 { 00:33:59.059 "jsonrpc": "2.0", 00:33:59.059 "method": "framework_start_init", 00:33:59.059 "id": 1 00:33:59.059 } 00:33:59.059 00:33:59.059 INFO: Requests: 00:33:59.059 { 00:33:59.059 "jsonrpc": "2.0", 00:33:59.059 "method": "framework_start_init", 00:33:59.059 "id": 1 00:33:59.059 } 00:33:59.059 00:33:59.059 [2024-07-26 18:34:25.075457] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:59.059 INFO: response: 00:33:59.059 { 00:33:59.059 "jsonrpc": "2.0", 00:33:59.059 "id": 1, 00:33:59.059 "result": true 00:33:59.059 } 00:33:59.059 00:33:59.059 INFO: response: 00:33:59.059 { 00:33:59.059 "jsonrpc": "2.0", 00:33:59.059 "id": 1, 00:33:59.059 "result": true 00:33:59.059 } 00:33:59.059 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.059 18:34:25 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
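Pulling the pieces together, the passthru target bring-up, including the subsystem wiring that the trace walks through just below, reduces to launching the target paused and driving a short rpc.py sequence. A sketch with values from this run; the until-loop stands in for the harness's waitforlisten and rpc_get_methods is simply a cheap liveness RPC:

# Sketch of the bring-up: start the target held at --wait-for-rpc, wait for
# its RPC socket, then configure it (same flags/args as the rpc_cmd calls).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # waitforlisten, simplified
"$rpc" nvmf_set_config --passthru-identify-ctrlr   # forward admin Identify to the real ctrlr
"$rpc" framework_start_init                        # release the --wait-for-rpc hold
"$rpc" nvmf_create_transport -t tcp -o -u 8192     # same transport flags as the trace
"$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1  # max 1 ns
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420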
00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.059 INFO: Setting log level to 40 00:33:59.059 INFO: Setting log level to 40 00:33:59.059 INFO: Setting log level to 40 00:33:59.059 [2024-07-26 18:34:25.085650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.059 18:34:25 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.059 18:34:25 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.059 18:34:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.343 Nvme0n1 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.343 [2024-07-26 18:34:27.975335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.343 [ 00:34:02.343 { 00:34:02.343 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:02.343 "subtype": "Discovery", 00:34:02.343 "listen_addresses": [], 00:34:02.343 "allow_any_host": true, 00:34:02.343 "hosts": [] 00:34:02.343 }, 00:34:02.343 { 00:34:02.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.343 "subtype": "NVMe", 00:34:02.343 "listen_addresses": [ 00:34:02.343 { 00:34:02.343 "trtype": "TCP", 00:34:02.343 "adrfam": "IPv4", 00:34:02.343 "traddr": "10.0.0.2", 00:34:02.343 
"trsvcid": "4420" 00:34:02.343 } 00:34:02.343 ], 00:34:02.343 "allow_any_host": true, 00:34:02.343 "hosts": [], 00:34:02.343 "serial_number": "SPDK00000000000001", 00:34:02.343 "model_number": "SPDK bdev Controller", 00:34:02.343 "max_namespaces": 1, 00:34:02.343 "min_cntlid": 1, 00:34:02.343 "max_cntlid": 65519, 00:34:02.343 "namespaces": [ 00:34:02.343 { 00:34:02.343 "nsid": 1, 00:34:02.343 "bdev_name": "Nvme0n1", 00:34:02.343 "name": "Nvme0n1", 00:34:02.343 "nguid": "764EE008497F4F32A67276944D3056EE", 00:34:02.343 "uuid": "764ee008-497f-4f32-a672-76944d3056ee" 00:34:02.343 } 00:34:02.343 ] 00:34:02.343 } 00:34:02.343 ] 00:34:02.343 18:34:27 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:02.343 18:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:02.343 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:02.343 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:02.343 18:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:02.343 rmmod nvme_tcp 00:34:02.343 rmmod nvme_fabrics 00:34:02.343 rmmod nvme_keyring 00:34:02.343 18:34:28 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1626322 ']' 00:34:02.343 18:34:28 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1626322 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1626322 ']' 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1626322 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:02.343 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1626322 00:34:02.603 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:02.603 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:02.603 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1626322' 00:34:02.603 killing process with pid 1626322 00:34:02.603 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1626322 00:34:02.603 18:34:28 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1626322 00:34:04.018 18:34:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:04.018 18:34:30 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:04.018 18:34:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:04.018 18:34:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:04.018 18:34:30 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:04.018 18:34:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.018 18:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:04.018 18:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.968 18:34:32 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:05.968 00:34:05.968 real 0m17.947s 00:34:05.968 user 0m26.931s 00:34:05.968 sys 0m2.233s 00:34:05.968 18:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:05.968 18:34:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:05.968 ************************************ 00:34:05.968 END TEST nvmf_identify_passthru 00:34:05.968 ************************************ 00:34:05.968 18:34:32 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:05.968 18:34:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:05.968 18:34:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:05.968 18:34:32 -- common/autotest_common.sh@10 -- # set +x 00:34:06.227 ************************************ 00:34:06.227 START TEST nvmf_dif 00:34:06.227 ************************************ 00:34:06.227 18:34:32 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:06.227 * Looking for test storage... 
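Before the next test begins, the teardown just traced (nvmftestfini) is worth spelling out. Roughly, with this run's pid and interface names; the netns deletion is an assumption about what _remove_spdk_ns does, since the trace suppresses its output:

# Rough shape of nvmftestfini as traced above (pid/names from this run).
modprobe -v -r nvme-tcp                    # rmmod nvme_tcp
modprobe -v -r nvme-fabrics                # rmmod nvme_fabrics / nvme_keyring
kill 1626322 && wait 1626322 2>/dev/null   # killprocess on the nvmf_tgt reactor
ip netns del cvl_0_0_ns_spdk               # assumed: _remove_spdk_ns netns cleanup
ip -4 addr flush cvl_0_1                   # drop the initiator test address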
00:34:06.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:06.227 18:34:32 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.227 18:34:32 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.227 18:34:32 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.227 18:34:32 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.227 18:34:32 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.227 18:34:32 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.227 18:34:32 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.227 18:34:32 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:06.227 18:34:32 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:06.227 18:34:32 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:06.228 18:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:06.228 18:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:06.228 18:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:06.228 18:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:06.228 18:34:32 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.228 18:34:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.228 18:34:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:06.228 18:34:32 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:06.228 18:34:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:08.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:08.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.133 18:34:34 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:08.134 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:08.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.134 18:34:34 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:08.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:34:08.134 00:34:08.134 --- 10.0.0.2 ping statistics --- 00:34:08.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.134 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:08.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:34:08.134 00:34:08.134 --- 10.0.0.1 ping statistics --- 00:34:08.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.134 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:08.134 18:34:34 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:09.069 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:09.069 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:09.069 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:09.069 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:09.069 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:09.069 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:09.069 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:09.069 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:09.069 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:09.069 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:09.069 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:09.069 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:09.069 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:09.069 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:09.070 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:09.070 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:09.070 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:09.328 18:34:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:09.328 18:34:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1629489 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:09.328 18:34:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1629489 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1629489 ']' 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:09.328 18:34:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.328 [2024-07-26 18:34:35.453458] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:34:09.328 [2024-07-26 18:34:35.453527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:09.588 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.588 [2024-07-26 18:34:35.493489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:09.588 [2024-07-26 18:34:35.521586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.588 [2024-07-26 18:34:35.610255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.588 [2024-07-26 18:34:35.610320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.588 [2024-07-26 18:34:35.610333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.588 [2024-07-26 18:34:35.610345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.588 [2024-07-26 18:34:35.610354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
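The two app_setup_trace notices above are actionable as-is: with -e 0xFFFF every tracepoint group is enabled, so a snapshot can be pulled either way. The spdk_trace invocation and the shm file name come straight from the notices; only the binary path is assumed to match this build tree:

# Both options from the notices above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # copy for offline analysis/debug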
00:34:09.588 [2024-07-26 18:34:35.610381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.588 18:34:35 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:09.588 18:34:35 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:09.588 18:34:35 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:09.588 18:34:35 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.588 18:34:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 18:34:35 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.848 18:34:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:09.848 18:34:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:09.848 18:34:35 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.848 18:34:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 [2024-07-26 18:34:35.759159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.848 18:34:35 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.848 18:34:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:09.848 18:34:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:09.848 18:34:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:09.848 18:34:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 ************************************ 00:34:09.848 START TEST fio_dif_1_default 00:34:09.848 ************************************ 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 bdev_null0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 [2024-07-26 18:34:35.819454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:09.848 { 00:34:09.848 "params": { 00:34:09.848 "name": "Nvme$subsystem", 00:34:09.848 "trtype": "$TEST_TRANSPORT", 00:34:09.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:09.848 "adrfam": "ipv4", 00:34:09.848 "trsvcid": "$NVMF_PORT", 00:34:09.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.848 "hdgst": ${hdgst:-false}, 00:34:09.848 "ddgst": ${ddgst:-false} 00:34:09.848 }, 00:34:09.848 "method": "bdev_nvme_attach_controller" 00:34:09.848 } 00:34:09.848 EOF 00:34:09.848 )") 00:34:09.848 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:09.849 "params": { 00:34:09.849 "name": "Nvme0", 00:34:09.849 "trtype": "tcp", 00:34:09.849 "traddr": "10.0.0.2", 00:34:09.849 "adrfam": "ipv4", 00:34:09.849 "trsvcid": "4420", 00:34:09.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:09.849 "hdgst": false, 00:34:09.849 "ddgst": false 00:34:09.849 }, 00:34:09.849 "method": "bdev_nvme_attach_controller" 00:34:09.849 }' 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:09.849 18:34:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.107 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:10.107 fio-3.35 00:34:10.107 Starting 1 thread 00:34:10.107 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.318 00:34:22.318 filename0: (groupid=0, jobs=1): err= 0: pid=1629694: Fri Jul 26 18:34:46 2024 00:34:22.318 read: IOPS=187, BW=750KiB/s (768kB/s)(7504KiB/10001msec) 00:34:22.318 slat (nsec): min=3963, max=37123, avg=9392.68, stdev=2638.16 00:34:22.318 clat (usec): min=767, max=48061, avg=21293.14, stdev=20123.80 00:34:22.318 lat (usec): min=775, max=48077, avg=21302.53, stdev=20123.72 00:34:22.318 clat percentiles (usec): 00:34:22.318 | 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 840], 20.00th=[ 857], 00:34:22.318 | 30.00th=[ 881], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:34:22.318 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:22.318 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973], 00:34:22.318 | 99.99th=[47973] 00:34:22.318 bw ( KiB/s): min= 640, max= 768, per=99.82%, avg=749.47, stdev=38.92, samples=19 00:34:22.318 iops : min= 160, max= 192, 
avg=187.37, stdev= 9.73, samples=19 00:34:22.318 lat (usec) : 1000=48.99% 00:34:22.318 lat (msec) : 2=0.27%, 50=50.75% 00:34:22.318 cpu : usr=89.65%, sys=10.07%, ctx=16, majf=0, minf=236 00:34:22.318 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.318 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.318 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:22.318 00:34:22.318 Run status group 0 (all jobs): 00:34:22.318 READ: bw=750KiB/s (768kB/s), 750KiB/s-750KiB/s (768kB/s-768kB/s), io=7504KiB (7684kB), run=10001-10001msec 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 00:34:22.318 real 0m11.032s 00:34:22.318 user 0m10.021s 00:34:22.318 sys 0m1.251s 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 ************************************ 00:34:22.318 END TEST fio_dif_1_default 00:34:22.318 ************************************ 00:34:22.318 18:34:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:22.318 18:34:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:22.318 18:34:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 ************************************ 00:34:22.318 START TEST fio_dif_1_multi_subsystems 00:34:22.318 ************************************ 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.318 18:34:46 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 bdev_null0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 [2024-07-26 18:34:46.900285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 bdev_null1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.318 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:22.319 { 00:34:22.319 "params": { 00:34:22.319 "name": "Nvme$subsystem", 00:34:22.319 "trtype": "$TEST_TRANSPORT", 00:34:22.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.319 "adrfam": "ipv4", 00:34:22.319 "trsvcid": "$NVMF_PORT", 00:34:22.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.319 "hdgst": ${hdgst:-false}, 00:34:22.319 "ddgst": ${ddgst:-false} 00:34:22.319 }, 00:34:22.319 "method": "bdev_nvme_attach_controller" 00:34:22.319 } 00:34:22.319 EOF 00:34:22.319 )") 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.319 
18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:22.319 { 00:34:22.319 "params": { 00:34:22.319 "name": "Nvme$subsystem", 00:34:22.319 "trtype": "$TEST_TRANSPORT", 00:34:22.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.319 "adrfam": "ipv4", 00:34:22.319 "trsvcid": "$NVMF_PORT", 00:34:22.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.319 "hdgst": ${hdgst:-false}, 00:34:22.319 "ddgst": ${ddgst:-false} 00:34:22.319 }, 00:34:22.319 "method": "bdev_nvme_attach_controller" 00:34:22.319 } 00:34:22.319 EOF 00:34:22.319 )") 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
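The jq call just traced is the tail of gen_nvmf_target_json: one heredoc fragment per subsystem is collected into config[], the fragments are joined on IFS=',' (the printf that follows), and the result is spliced into a bdev-subsystem envelope that fio reads as --spdk_json_conf on /dev/fd/62. A reduced sketch of that pattern; the exact envelope wording is inferred from the traced calls and SPDK's JSON-config layout, not quoted from nvmf/common.sh:

gen_json() {   # sketch of gen_nvmf_target_json for subsystems passed as args
    local sub cfg=()
    for sub in "$@"; do
        cfg+=("$(cat <<EOF
{ "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
  "hostnqn": "nqn.2016-06.io.spdk:host$sub",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
    done
    # Join the fragments and wrap them in the bdev subsystem envelope.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
  "config": [ $(IFS=,; printf '%s\n' "${cfg[*]}") ] } ] }
JSON
}

The printed configuration visible below is exactly this join for Nvme0 and Nvme1.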
00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:22.319 "params": { 00:34:22.319 "name": "Nvme0", 00:34:22.319 "trtype": "tcp", 00:34:22.319 "traddr": "10.0.0.2", 00:34:22.319 "adrfam": "ipv4", 00:34:22.319 "trsvcid": "4420", 00:34:22.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:22.319 "hdgst": false, 00:34:22.319 "ddgst": false 00:34:22.319 }, 00:34:22.319 "method": "bdev_nvme_attach_controller" 00:34:22.319 },{ 00:34:22.319 "params": { 00:34:22.319 "name": "Nvme1", 00:34:22.319 "trtype": "tcp", 00:34:22.319 "traddr": "10.0.0.2", 00:34:22.319 "adrfam": "ipv4", 00:34:22.319 "trsvcid": "4420", 00:34:22.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:22.319 "hdgst": false, 00:34:22.319 "ddgst": false 00:34:22.319 }, 00:34:22.319 "method": "bdev_nvme_attach_controller" 00:34:22.319 }' 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:22.319 18:34:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.319 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:22.319 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:22.319 fio-3.35 00:34:22.319 Starting 2 threads 00:34:22.319 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.283 00:34:32.283 filename0: (groupid=0, jobs=1): err= 0: pid=1631092: Fri Jul 26 18:34:57 2024 00:34:32.283 read: IOPS=187, BW=749KiB/s (767kB/s)(7504KiB/10018msec) 00:34:32.283 slat (nsec): min=6827, max=85237, avg=9709.04, stdev=4822.42 00:34:32.283 clat (usec): min=869, max=44017, avg=21328.67, stdev=20303.05 00:34:32.283 lat (usec): min=876, max=44047, avg=21338.38, stdev=20303.17 00:34:32.283 clat percentiles (usec): 00:34:32.283 | 1.00th=[ 898], 5.00th=[ 930], 10.00th=[ 955], 20.00th=[ 979], 00:34:32.283 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[40633], 60.00th=[41157], 00:34:32.283 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:32.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:32.283 | 99.99th=[43779] 
00:34:32.283 bw ( KiB/s): min= 704, max= 768, per=65.69%, avg=748.80, stdev=28.24, samples=20 00:34:32.283 iops : min= 176, max= 192, avg=187.20, stdev= 7.06, samples=20 00:34:32.283 lat (usec) : 1000=34.17% 00:34:32.283 lat (msec) : 2=15.72%, 50=50.11% 00:34:32.283 cpu : usr=93.28%, sys=6.42%, ctx=13, majf=0, minf=212 00:34:32.283 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.283 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.283 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:32.283 filename1: (groupid=0, jobs=1): err= 0: pid=1631093: Fri Jul 26 18:34:57 2024 00:34:32.283 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10018msec) 00:34:32.283 slat (nsec): min=6904, max=27313, avg=8910.83, stdev=2921.82 00:34:32.283 clat (usec): min=40876, max=44006, avg=41029.62, stdev=278.68 00:34:32.283 lat (usec): min=40883, max=44032, avg=41038.53, stdev=279.17 00:34:32.283 clat percentiles (usec): 00:34:32.283 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:32.283 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:32.283 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:32.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:32.283 | 99.99th=[43779] 00:34:32.283 bw ( KiB/s): min= 384, max= 416, per=34.07%, avg=388.80, stdev=11.72, samples=20 00:34:32.283 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:32.283 lat (msec) : 50=100.00% 00:34:32.283 cpu : usr=94.18%, sys=5.53%, ctx=12, majf=0, minf=113 00:34:32.283 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.283 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.283 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:32.283 00:34:32.283 Run status group 0 (all jobs): 00:34:32.283 READ: bw=1139KiB/s (1166kB/s), 390KiB/s-749KiB/s (399kB/s-767kB/s), io=11.1MiB (11.7MB), run=10018-10018msec 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 
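Teardown mirrors setup and runs per test case, so a failed case cannot leak subsystems into the next one; each subsystem is removed before its backing null bdev. The rpc_cmd calls traced here and just below correspond to, as stand-alone commands:

# Per-test cleanup, in dependency order (subsystem first, then its bdev).
for sub in 0 1; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    ./scripts/rpc.py bdev_null_delete "bdev_null$sub"
done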
00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:32.283 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 00:34:32.284 real 0m11.368s 00:34:32.284 user 0m20.039s 00:34:32.284 sys 0m1.510s 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 ************************************ 00:34:32.284 END TEST fio_dif_1_multi_subsystems 00:34:32.284 ************************************ 00:34:32.284 18:34:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:32.284 18:34:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:32.284 18:34:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 ************************************ 00:34:32.284 START TEST fio_dif_rand_params 00:34:32.284 ************************************ 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 bdev_null0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.284 [2024-07-26 18:34:58.312301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:32.284 { 00:34:32.284 "params": { 00:34:32.284 "name": "Nvme$subsystem", 00:34:32.284 "trtype": "$TEST_TRANSPORT", 00:34:32.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:32.284 "adrfam": "ipv4", 00:34:32.284 "trsvcid": "$NVMF_PORT", 00:34:32.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:32.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:32.284 "hdgst": ${hdgst:-false}, 00:34:32.284 "ddgst": ${ddgst:-false} 00:34:32.284 }, 00:34:32.284 "method": "bdev_nvme_attach_controller" 00:34:32.284 } 00:34:32.284 EOF 00:34:32.284 )") 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:32.284 18:34:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
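fio_dif_rand_params starts with NULL_DIF=3: a 64 MB null bdev with 512-byte blocks and 16 bytes of per-block metadata carrying DIF type 3, exported through the same single-subsystem layout as before. The set-up traced above, written as stand-alone rpc.py calls:

# Stand-alone equivalent of the DIF-type-3 subsystem set-up traced above.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420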
00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:32.284 "params": { 00:34:32.284 "name": "Nvme0", 00:34:32.284 "trtype": "tcp", 00:34:32.284 "traddr": "10.0.0.2", 00:34:32.284 "adrfam": "ipv4", 00:34:32.284 "trsvcid": "4420", 00:34:32.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:32.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:32.284 "hdgst": false, 00:34:32.284 "ddgst": false 00:34:32.284 }, 00:34:32.284 "method": "bdev_nvme_attach_controller" 00:34:32.284 }' 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:32.284 18:34:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.542 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:32.542 ... 
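The job file fio reads on /dev/fd/61 is produced by gen_fio_conf from the parameters set at target/dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5). An approximate reconstruction from the banner above; the filename= value and the [global] extras are assumptions, since the generated file itself is not echoed into the log:

# gen_fio_conf (sketch): job file passed to fio on fd 61. Values match the
# banner; filename=Nvme0n1 assumes the default namespace bdev of controller Nvme0.
cat <<FIO
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5
[filename0]
filename=Nvme0n1
FIO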
00:34:32.542 fio-3.35 00:34:32.542 Starting 3 threads 00:34:32.542 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.106 00:34:39.106 filename0: (groupid=0, jobs=1): err= 0: pid=1632480: Fri Jul 26 18:35:04 2024 00:34:39.106 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5003msec) 00:34:39.106 slat (nsec): min=5251, max=34480, avg=12722.06, stdev=2119.51 00:34:39.106 clat (usec): min=5149, max=56602, avg=12302.82, stdev=11100.15 00:34:39.106 lat (usec): min=5163, max=56620, avg=12315.54, stdev=11100.11 00:34:39.106 clat percentiles (usec): 00:34:39.106 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7504], 00:34:39.106 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10159], 00:34:39.106 | 70.00th=[10945], 80.00th=[11731], 90.00th=[13042], 95.00th=[49546], 00:34:39.106 | 99.00th=[52691], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:34:39.106 | 99.99th=[56361] 00:34:39.106 bw ( KiB/s): min=17628, max=35584, per=38.79%, avg=31126.00, stdev=5876.99, samples=10 00:34:39.106 iops : min= 137, max= 278, avg=243.10, stdev=46.10, samples=10 00:34:39.106 lat (msec) : 10=58.21%, 20=34.40%, 50=3.20%, 100=4.19% 00:34:39.106 cpu : usr=92.68%, sys=6.86%, ctx=10, majf=0, minf=79 00:34:39.106 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:39.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.106 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:39.106 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:39.106 filename0: (groupid=0, jobs=1): err= 0: pid=1632481: Fri Jul 26 18:35:04 2024 00:34:39.106 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(121MiB/5027msec) 00:34:39.106 slat (nsec): min=4727, max=28110, avg=14602.89, stdev=3013.18 00:34:39.106 clat (usec): min=5506, max=92217, avg=15569.09, stdev=14370.96 00:34:39.106 lat (usec): min=5518, max=92235, avg=15583.69, stdev=14371.24 00:34:39.107 clat percentiles (usec): 00:34:39.107 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 8029], 00:34:39.107 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10683], 60.00th=[11600], 00:34:39.107 | 70.00th=[12256], 80.00th=[13566], 90.00th=[49546], 95.00th=[51643], 00:34:39.107 | 99.00th=[53740], 99.50th=[54264], 99.90th=[91751], 99.95th=[91751], 00:34:39.107 | 99.99th=[91751] 00:34:39.107 bw ( KiB/s): min=16128, max=40016, per=30.76%, avg=24686.40, stdev=6963.82, samples=10 00:34:39.107 iops : min= 126, max= 312, avg=192.80, stdev=54.25, samples=10 00:34:39.107 lat (msec) : 10=44.88%, 20=41.57%, 50=4.96%, 100=8.58% 00:34:39.107 cpu : usr=92.90%, sys=6.55%, ctx=8, majf=0, minf=81 00:34:39.107 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:39.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.107 issued rwts: total=967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:39.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:39.107 filename0: (groupid=0, jobs=1): err= 0: pid=1632482: Fri Jul 26 18:35:04 2024 00:34:39.107 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(122MiB/5044msec) 00:34:39.107 slat (nsec): min=5129, max=29897, avg=12796.04, stdev=1788.51 00:34:39.107 clat (usec): min=5055, max=95149, avg=15384.80, stdev=13346.05 00:34:39.107 lat (usec): min=5068, max=95162, avg=15397.59, stdev=13346.00 00:34:39.107 clat percentiles (usec): 
00:34:39.107 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 7111], 20.00th=[ 8979], 00:34:39.107 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11863], 60.00th=[12780], 00:34:39.107 | 70.00th=[13698], 80.00th=[14877], 90.00th=[17171], 95.00th=[52691], 00:34:39.107 | 99.00th=[55837], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897], 00:34:39.107 | 99.99th=[94897] 00:34:39.107 bw ( KiB/s): min=15872, max=31744, per=31.07%, avg=24934.40, stdev=5441.56, samples=10 00:34:39.107 iops : min= 124, max= 248, avg=194.80, stdev=42.51, samples=10 00:34:39.107 lat (msec) : 10=30.50%, 20=60.08%, 50=0.82%, 100=8.60% 00:34:39.107 cpu : usr=92.62%, sys=6.82%, ctx=7, majf=0, minf=120 00:34:39.107 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:39.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.107 issued rwts: total=977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:39.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:39.107 00:34:39.107 Run status group 0 (all jobs): 00:34:39.107 READ: bw=78.4MiB/s (82.2MB/s), 24.0MiB/s-30.4MiB/s (25.2MB/s-31.9MB/s), io=395MiB (414MB), run=5003-5044msec 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
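The next case switches to NULL_DIF=2 with 4k blocks, eight jobs, iodepth 16 and files=2; two extra files on top of filename0 means three subsystems (0-2), and 8 jobs x 3 files accounts for the 24 fio threads started further down. The per-subsystem set-up that follows is the same four rpc.py calls as before, shown here as a loop for readability (the harness traces them one by one):

# DIF-type-2 set-up for subsystems 0..2, mirroring the traced rpc_cmd calls.
for sub in 0 1 2; do
    ./scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done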
00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 bdev_null0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 [2024-07-26 18:35:04.334827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 bdev_null1 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 bdev_null2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.107 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:34:39.108 { 00:34:39.108 "params": { 00:34:39.108 "name": "Nvme$subsystem", 00:34:39.108 "trtype": "$TEST_TRANSPORT", 00:34:39.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.108 "adrfam": "ipv4", 00:34:39.108 "trsvcid": "$NVMF_PORT", 00:34:39.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.108 "hdgst": ${hdgst:-false}, 00:34:39.108 "ddgst": ${ddgst:-false} 00:34:39.108 }, 00:34:39.108 "method": "bdev_nvme_attach_controller" 00:34:39.108 } 00:34:39.108 EOF 00:34:39.108 )") 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:39.108 { 00:34:39.108 "params": { 00:34:39.108 "name": "Nvme$subsystem", 00:34:39.108 "trtype": "$TEST_TRANSPORT", 00:34:39.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.108 "adrfam": "ipv4", 00:34:39.108 "trsvcid": "$NVMF_PORT", 00:34:39.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.108 "hdgst": ${hdgst:-false}, 00:34:39.108 "ddgst": ${ddgst:-false} 00:34:39.108 }, 00:34:39.108 "method": "bdev_nvme_attach_controller" 00:34:39.108 } 00:34:39.108 EOF 00:34:39.108 )") 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:39.108 { 00:34:39.108 "params": { 00:34:39.108 "name": "Nvme$subsystem", 00:34:39.108 "trtype": "$TEST_TRANSPORT", 00:34:39.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.108 "adrfam": "ipv4", 00:34:39.108 "trsvcid": "$NVMF_PORT", 00:34:39.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.108 "hdgst": ${hdgst:-false}, 00:34:39.108 "ddgst": ${ddgst:-false} 00:34:39.108 }, 00:34:39.108 "method": "bdev_nvme_attach_controller" 00:34:39.108 } 00:34:39.108 EOF 00:34:39.108 )") 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:39.108 "params": { 00:34:39.108 "name": "Nvme0", 00:34:39.108 "trtype": "tcp", 00:34:39.108 "traddr": "10.0.0.2", 00:34:39.108 "adrfam": "ipv4", 00:34:39.108 "trsvcid": "4420", 00:34:39.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:39.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:39.108 "hdgst": false, 00:34:39.108 "ddgst": false 00:34:39.108 }, 00:34:39.108 "method": "bdev_nvme_attach_controller" 00:34:39.108 },{ 00:34:39.108 "params": { 00:34:39.108 "name": "Nvme1", 00:34:39.108 "trtype": "tcp", 00:34:39.108 "traddr": "10.0.0.2", 00:34:39.108 "adrfam": "ipv4", 00:34:39.108 "trsvcid": "4420", 00:34:39.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.108 "hdgst": false, 00:34:39.108 "ddgst": false 00:34:39.108 }, 00:34:39.108 "method": "bdev_nvme_attach_controller" 00:34:39.108 },{ 00:34:39.108 "params": { 00:34:39.108 "name": "Nvme2", 00:34:39.108 "trtype": "tcp", 00:34:39.108 "traddr": "10.0.0.2", 00:34:39.108 "adrfam": "ipv4", 00:34:39.108 "trsvcid": "4420", 00:34:39.108 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:39.108 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:39.108 "hdgst": false, 00:34:39.108 "ddgst": false 00:34:39.108 }, 00:34:39.108 "method": "bdev_nvme_attach_controller" 00:34:39.108 }' 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:39.108 18:35:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.108 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:39.108 ... 00:34:39.108 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:39.108 ... 00:34:39.108 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:39.108 ... 00:34:39.108 fio-3.35 00:34:39.108 Starting 24 threads 00:34:39.108 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.417 00:34:51.417 filename0: (groupid=0, jobs=1): err= 0: pid=1633347: Fri Jul 26 18:35:15 2024 00:34:51.417 read: IOPS=94, BW=380KiB/s (389kB/s)(3856KiB/10149msec) 00:34:51.417 slat (usec): min=5, max=117, avg=14.34, stdev=11.43 00:34:51.417 clat (msec): min=57, max=261, avg=168.07, stdev=24.84 00:34:51.417 lat (msec): min=57, max=261, avg=168.08, stdev=24.84 00:34:51.417 clat percentiles (msec): 00:34:51.417 | 1.00th=[ 58], 5.00th=[ 140], 10.00th=[ 155], 20.00th=[ 163], 00:34:51.417 | 30.00th=[ 167], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 169], 00:34:51.417 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 194], 95.00th=[ 211], 00:34:51.417 | 99.00th=[ 255], 99.50th=[ 259], 99.90th=[ 262], 99.95th=[ 262], 00:34:51.417 | 99.99th=[ 262] 00:34:51.417 bw ( KiB/s): min= 272, max= 512, per=5.84%, avg=379.20, stdev=41.88, samples=20 00:34:51.417 iops : min= 68, max= 128, avg=94.80, stdev=10.47, samples=20 00:34:51.417 lat (msec) : 100=3.32%, 250=95.44%, 500=1.24% 00:34:51.417 cpu : usr=97.14%, sys=1.93%, ctx=91, majf=0, minf=11 00:34:51.417 IO depths : 1=0.8%, 2=6.8%, 4=24.3%, 8=56.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:34:51.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.417 filename0: (groupid=0, jobs=1): err= 0: pid=1633348: Fri Jul 26 18:35:15 2024 00:34:51.417 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10018msec) 00:34:51.417 slat (usec): min=20, max=182, avg=68.32, stdev=13.56 00:34:51.417 clat (msec): min=150, max=398, avg=256.32, stdev=39.20 00:34:51.417 lat (msec): min=150, max=398, avg=256.39, stdev=39.20 00:34:51.417 clat percentiles (msec): 00:34:51.417 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 239], 20.00th=[ 249], 00:34:51.417 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:34:51.417 | 70.00th=[ 257], 80.00th=[ 262], 90.00th=[ 279], 95.00th=[ 338], 00:34:51.417 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:34:51.417 | 99.99th=[ 401] 00:34:51.417 bw ( KiB/s): min= 128, max= 256, per=3.75%, avg=243.20, stdev=39.40, samples=20 00:34:51.417 iops : min= 32, max= 64, avg=60.80, stdev= 9.85, samples=20 00:34:51.417 lat (msec) : 250=42.47%, 500=57.53% 00:34:51.417 cpu : usr=97.26%, sys=1.77%, ctx=43, majf=0, minf=9 00:34:51.417 IO depths : 1=3.4%, 
2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:51.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.417 filename0: (groupid=0, jobs=1): err= 0: pid=1633349: Fri Jul 26 18:35:15 2024 00:34:51.417 read: IOPS=67, BW=271KiB/s (278kB/s)(2752KiB/10150msec) 00:34:51.417 slat (usec): min=6, max=110, avg=58.62, stdev=18.39 00:34:51.417 clat (msec): min=58, max=282, avg=235.53, stdev=48.77 00:34:51.417 lat (msec): min=58, max=282, avg=235.59, stdev=48.78 00:34:51.417 clat percentiles (msec): 00:34:51.417 | 1.00th=[ 58], 5.00th=[ 140], 10.00th=[ 150], 20.00th=[ 243], 00:34:51.417 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:34:51.417 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 264], 95.00th=[ 279], 00:34:51.417 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:34:51.417 | 99.99th=[ 284] 00:34:51.417 bw ( KiB/s): min= 256, max= 512, per=4.13%, avg=268.80, stdev=57.24, samples=20 00:34:51.417 iops : min= 64, max= 128, avg=67.20, stdev=14.31, samples=20 00:34:51.417 lat (msec) : 100=4.65%, 250=48.26%, 500=47.09% 00:34:51.417 cpu : usr=97.85%, sys=1.63%, ctx=32, majf=0, minf=9 00:34:51.417 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:51.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.417 filename0: (groupid=0, jobs=1): err= 0: pid=1633350: Fri Jul 26 18:35:15 2024 00:34:51.417 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10120msec) 00:34:51.417 slat (usec): min=6, max=184, avg=32.23, stdev=21.56 00:34:51.417 clat (msec): min=129, max=366, avg=252.75, stdev=31.54 00:34:51.417 lat (msec): min=129, max=366, avg=252.78, stdev=31.54 00:34:51.417 clat percentiles (msec): 00:34:51.417 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 222], 20.00th=[ 249], 00:34:51.417 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.417 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 330], 00:34:51.417 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:34:51.417 | 99.99th=[ 368] 00:34:51.417 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=249.60, stdev=28.62, samples=20 00:34:51.417 iops : min= 32, max= 64, avg=62.40, stdev= 7.16, samples=20 00:34:51.417 lat (msec) : 250=40.00%, 500=60.00% 00:34:51.417 cpu : usr=95.26%, sys=2.74%, ctx=265, majf=0, minf=9 00:34:51.417 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:51.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.417 filename0: (groupid=0, jobs=1): err= 0: pid=1633351: Fri Jul 26 18:35:15 2024 00:34:51.417 read: IOPS=67, BW=270KiB/s (277kB/s)(2744KiB/10149msec) 00:34:51.417 slat (nsec): min=8920, max=99056, avg=46452.88, stdev=23726.76 00:34:51.417 clat (msec): min=58, max=340, avg=236.08, stdev=53.27 00:34:51.417 lat 
(msec): min=58, max=340, avg=236.13, stdev=53.28 00:34:51.417 clat percentiles (msec): 00:34:51.417 | 1.00th=[ 59], 5.00th=[ 150], 10.00th=[ 161], 20.00th=[ 211], 00:34:51.417 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:34:51.417 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 279], 95.00th=[ 284], 00:34:51.417 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:34:51.417 | 99.99th=[ 342] 00:34:51.417 bw ( KiB/s): min= 144, max= 384, per=4.13%, avg=268.00, stdev=53.92, samples=20 00:34:51.417 iops : min= 36, max= 96, avg=67.00, stdev=13.48, samples=20 00:34:51.417 lat (msec) : 100=4.66%, 250=42.71%, 500=52.62% 00:34:51.417 cpu : usr=98.25%, sys=1.34%, ctx=14, majf=0, minf=9 00:34:51.417 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:51.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.417 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.417 filename0: (groupid=0, jobs=1): err= 0: pid=1633352: Fri Jul 26 18:35:15 2024 00:34:51.417 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10105msec) 00:34:51.417 slat (nsec): min=11930, max=63283, avg=28061.23, stdev=8704.97 00:34:51.417 clat (msec): min=151, max=396, avg=256.88, stdev=39.37 00:34:51.417 lat (msec): min=151, max=396, avg=256.91, stdev=39.37 00:34:51.417 clat percentiles (msec): 00:34:51.417 | 1.00th=[ 161], 5.00th=[ 171], 10.00th=[ 239], 20.00th=[ 249], 00:34:51.417 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:34:51.417 | 70.00th=[ 257], 80.00th=[ 262], 90.00th=[ 279], 95.00th=[ 338], 00:34:51.417 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:34:51.417 | 99.99th=[ 397] 00:34:51.418 bw ( KiB/s): min= 128, max= 272, per=3.75%, avg=243.20, stdev=37.29, samples=20 00:34:51.418 iops : min= 32, max= 68, avg=60.80, stdev= 9.32, samples=20 00:34:51.418 lat (msec) : 250=38.78%, 500=61.22% 00:34:51.418 cpu : usr=97.99%, sys=1.41%, ctx=62, majf=0, minf=9 00:34:51.418 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename0: (groupid=0, jobs=1): err= 0: pid=1633353: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10128msec) 00:34:51.418 slat (usec): min=6, max=160, avg=27.16, stdev=13.56 00:34:51.418 clat (msec): min=153, max=351, avg=246.84, stdev=24.43 00:34:51.418 lat (msec): min=153, max=351, avg=246.86, stdev=24.43 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 155], 5.00th=[ 199], 10.00th=[ 215], 20.00th=[ 249], 00:34:51.418 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 253], 00:34:51.418 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 271], 00:34:51.418 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 351], 99.95th=[ 351], 00:34:51.418 | 99.99th=[ 351] 00:34:51.418 bw ( KiB/s): min= 144, max= 384, per=3.95%, avg=256.00, stdev=39.19, samples=20 00:34:51.418 iops : min= 36, max= 96, avg=64.00, stdev= 9.80, samples=20 00:34:51.418 lat (msec) : 250=43.14%, 500=56.86% 00:34:51.418 cpu : usr=96.33%, 
sys=2.25%, ctx=33, majf=0, minf=9 00:34:51.418 IO depths : 1=0.5%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename0: (groupid=0, jobs=1): err= 0: pid=1633354: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10126msec) 00:34:51.418 slat (nsec): min=8316, max=61277, avg=15974.51, stdev=8196.67 00:34:51.418 clat (msec): min=162, max=366, avg=252.75, stdev=16.42 00:34:51.418 lat (msec): min=162, max=366, avg=252.76, stdev=16.42 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 211], 5.00th=[ 222], 10.00th=[ 243], 20.00th=[ 249], 00:34:51.418 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.418 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 264], 95.00th=[ 284], 00:34:51.418 | 99.00th=[ 284], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:34:51.418 | 99.99th=[ 368] 00:34:51.418 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=249.60, stdev=28.62, samples=20 00:34:51.418 iops : min= 32, max= 64, avg=62.40, stdev= 7.16, samples=20 00:34:51.418 lat (msec) : 250=37.81%, 500=62.19% 00:34:51.418 cpu : usr=96.03%, sys=2.45%, ctx=53, majf=0, minf=9 00:34:51.418 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename1: (groupid=0, jobs=1): err= 0: pid=1633355: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=63, BW=252KiB/s (258kB/s)(2552KiB/10120msec) 00:34:51.418 slat (usec): min=13, max=165, avg=32.08, stdev=16.72 00:34:51.418 clat (msec): min=140, max=400, avg=253.21, stdev=36.89 00:34:51.418 lat (msec): min=140, max=400, avg=253.24, stdev=36.88 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 211], 20.00th=[ 247], 00:34:51.418 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:34:51.418 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 338], 00:34:51.418 | 99.00th=[ 342], 99.50th=[ 351], 99.90th=[ 401], 99.95th=[ 401], 00:34:51.418 | 99.99th=[ 401] 00:34:51.418 bw ( KiB/s): min= 128, max= 256, per=3.82%, avg=248.80, stdev=28.66, samples=20 00:34:51.418 iops : min= 32, max= 64, avg=62.20, stdev= 7.16, samples=20 00:34:51.418 lat (msec) : 250=46.08%, 500=53.92% 00:34:51.418 cpu : usr=97.27%, sys=1.76%, ctx=37, majf=0, minf=9 00:34:51.418 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename1: (groupid=0, jobs=1): err= 0: pid=1633356: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=92, BW=368KiB/s (377kB/s)(3736KiB/10149msec) 00:34:51.418 slat (nsec): min=6847, max=99607, avg=27462.15, stdev=18498.07 00:34:51.418 clat 
(msec): min=57, max=322, avg=172.33, stdev=32.34 00:34:51.418 lat (msec): min=57, max=322, avg=172.36, stdev=32.35 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 58], 5.00th=[ 140], 10.00th=[ 153], 20.00th=[ 167], 00:34:51.418 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 169], 60.00th=[ 169], 00:34:51.418 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 211], 95.00th=[ 249], 00:34:51.418 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 321], 99.95th=[ 321], 00:34:51.418 | 99.99th=[ 321] 00:34:51.418 bw ( KiB/s): min= 256, max= 513, per=5.66%, avg=367.25, stdev=57.58, samples=20 00:34:51.418 iops : min= 64, max= 128, avg=91.80, stdev=14.36, samples=20 00:34:51.418 lat (msec) : 100=3.43%, 250=94.65%, 500=1.93% 00:34:51.418 cpu : usr=98.08%, sys=1.48%, ctx=26, majf=0, minf=9 00:34:51.418 IO depths : 1=5.1%, 2=11.2%, 4=24.5%, 8=51.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename1: (groupid=0, jobs=1): err= 0: pid=1633357: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=64, BW=258KiB/s (264kB/s)(2616KiB/10132msec) 00:34:51.418 slat (nsec): min=7061, max=58394, avg=29097.38, stdev=8133.43 00:34:51.418 clat (msec): min=130, max=348, avg=247.32, stdev=38.00 00:34:51.418 lat (msec): min=130, max=348, avg=247.35, stdev=38.01 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 131], 5.00th=[ 165], 10.00th=[ 197], 20.00th=[ 243], 00:34:51.418 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.418 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 326], 00:34:51.418 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:34:51.418 | 99.99th=[ 351] 00:34:51.418 bw ( KiB/s): min= 240, max= 256, per=3.93%, avg=255.20, stdev= 3.58, samples=20 00:34:51.418 iops : min= 60, max= 64, avg=63.80, stdev= 0.89, samples=20 00:34:51.418 lat (msec) : 250=49.08%, 500=50.92% 00:34:51.418 cpu : usr=97.70%, sys=1.71%, ctx=14, majf=0, minf=9 00:34:51.418 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename1: (groupid=0, jobs=1): err= 0: pid=1633358: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10119msec) 00:34:51.418 slat (nsec): min=3948, max=90433, avg=60346.64, stdev=15696.44 00:34:51.418 clat (msec): min=161, max=343, avg=252.44, stdev=24.03 00:34:51.418 lat (msec): min=161, max=343, avg=252.50, stdev=24.03 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 163], 5.00th=[ 197], 10.00th=[ 243], 20.00th=[ 247], 00:34:51.418 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.418 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 266], 95.00th=[ 279], 00:34:51.418 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:34:51.418 | 99.99th=[ 342] 00:34:51.418 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=249.60, stdev=28.62, samples=20 00:34:51.418 iops : min= 32, max= 64, avg=62.40, stdev= 7.16, samples=20 
00:34:51.418 lat (msec) : 250=47.03%, 500=52.97% 00:34:51.418 cpu : usr=97.16%, sys=1.85%, ctx=37, majf=0, minf=9 00:34:51.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.418 filename1: (groupid=0, jobs=1): err= 0: pid=1633359: Fri Jul 26 18:35:15 2024 00:34:51.418 read: IOPS=66, BW=267KiB/s (274kB/s)(2688KiB/10053msec) 00:34:51.418 slat (usec): min=4, max=179, avg=66.63, stdev=25.91 00:34:51.418 clat (msec): min=61, max=281, avg=238.78, stdev=45.09 00:34:51.418 lat (msec): min=61, max=281, avg=238.85, stdev=45.10 00:34:51.418 clat percentiles (msec): 00:34:51.418 | 1.00th=[ 62], 5.00th=[ 140], 10.00th=[ 218], 20.00th=[ 243], 00:34:51.418 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:34:51.418 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 264], 95.00th=[ 275], 00:34:51.418 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:34:51.418 | 99.99th=[ 284] 00:34:51.418 bw ( KiB/s): min= 128, max= 512, per=4.04%, avg=262.40, stdev=65.33, samples=20 00:34:51.418 iops : min= 32, max= 128, avg=65.60, stdev=16.33, samples=20 00:34:51.418 lat (msec) : 100=4.76%, 250=41.37%, 500=53.87% 00:34:51.418 cpu : usr=96.51%, sys=2.20%, ctx=32, majf=0, minf=9 00:34:51.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:51.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.418 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename1: (groupid=0, jobs=1): err= 0: pid=1633360: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10128msec) 00:34:51.419 slat (nsec): min=8294, max=92628, avg=24969.58, stdev=7720.29 00:34:51.419 clat (msec): min=161, max=342, avg=246.80, stdev=27.80 00:34:51.419 lat (msec): min=161, max=342, avg=246.82, stdev=27.80 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 163], 5.00th=[ 171], 10.00th=[ 211], 20.00th=[ 249], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 253], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 264], 95.00th=[ 271], 00:34:51.419 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:34:51.419 | 99.99th=[ 342] 00:34:51.419 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=256.00, stdev=41.53, samples=20 00:34:51.419 iops : min= 32, max= 96, avg=64.00, stdev=10.38, samples=20 00:34:51.419 lat (msec) : 250=42.38%, 500=57.62% 00:34:51.419 cpu : usr=97.12%, sys=1.97%, ctx=14, majf=0, minf=9 00:34:51.419 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename1: (groupid=0, jobs=1): err= 0: pid=1633361: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10019msec) 
00:34:51.419 slat (nsec): min=6015, max=65989, avg=28924.55, stdev=9379.97 00:34:51.419 clat (msec): min=146, max=471, avg=256.64, stdev=27.47 00:34:51.419 lat (msec): min=146, max=471, avg=256.67, stdev=27.47 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 220], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 279], 00:34:51.419 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 472], 99.95th=[ 472], 00:34:51.419 | 99.99th=[ 472] 00:34:51.419 bw ( KiB/s): min= 128, max= 256, per=3.75%, avg=243.20, stdev=39.40, samples=20 00:34:51.419 iops : min= 32, max= 64, avg=60.80, stdev= 9.85, samples=20 00:34:51.419 lat (msec) : 250=35.42%, 500=64.58% 00:34:51.419 cpu : usr=97.97%, sys=1.67%, ctx=13, majf=0, minf=9 00:34:51.419 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename1: (groupid=0, jobs=1): err= 0: pid=1633362: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10033msec) 00:34:51.419 slat (usec): min=13, max=103, avg=64.35, stdev=14.51 00:34:51.419 clat (msec): min=140, max=390, avg=250.27, stdev=31.66 00:34:51.419 lat (msec): min=140, max=390, avg=250.34, stdev=31.66 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 161], 5.00th=[ 171], 10.00th=[ 220], 20.00th=[ 243], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 292], 00:34:51.419 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 393], 99.95th=[ 393], 00:34:51.419 | 99.99th=[ 393] 00:34:51.419 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=249.60, stdev=28.62, samples=20 00:34:51.419 iops : min= 32, max= 64, avg=62.40, stdev= 7.16, samples=20 00:34:51.419 lat (msec) : 250=42.03%, 500=57.97% 00:34:51.419 cpu : usr=97.27%, sys=1.74%, ctx=110, majf=0, minf=9 00:34:51.419 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename2: (groupid=0, jobs=1): err= 0: pid=1633363: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=94, BW=377KiB/s (386kB/s)(3824KiB/10138msec) 00:34:51.419 slat (usec): min=9, max=232, avg=36.22, stdev=23.87 00:34:51.419 clat (msec): min=57, max=278, avg=168.23, stdev=32.17 00:34:51.419 lat (msec): min=57, max=278, avg=168.27, stdev=32.18 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 58], 5.00th=[ 121], 10.00th=[ 140], 20.00th=[ 163], 00:34:51.419 | 30.00th=[ 167], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 169], 00:34:51.419 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 201], 95.00th=[ 243], 00:34:51.419 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:34:51.419 | 99.99th=[ 279] 00:34:51.419 bw ( KiB/s): min= 272, max= 512, per=5.80%, avg=376.00, stdev=44.20, samples=20 00:34:51.419 iops : 
min= 68, max= 128, avg=94.00, stdev=11.05, samples=20 00:34:51.419 lat (msec) : 100=3.35%, 250=94.56%, 500=2.09% 00:34:51.419 cpu : usr=96.19%, sys=2.36%, ctx=43, majf=0, minf=9 00:34:51.419 IO depths : 1=0.5%, 2=5.3%, 4=20.3%, 8=61.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=92.8%, 8=2.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename2: (groupid=0, jobs=1): err= 0: pid=1633364: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10132msec) 00:34:51.419 slat (nsec): min=8968, max=67973, avg=32552.61, stdev=10382.08 00:34:51.419 clat (msec): min=130, max=282, avg=246.78, stdev=26.64 00:34:51.419 lat (msec): min=130, max=282, avg=246.82, stdev=26.64 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 131], 5.00th=[ 197], 10.00th=[ 236], 20.00th=[ 247], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 264], 95.00th=[ 279], 00:34:51.419 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:34:51.419 | 99.99th=[ 284] 00:34:51.419 bw ( KiB/s): min= 256, max= 256, per=3.95%, avg=256.00, stdev= 0.00, samples=20 00:34:51.419 iops : min= 64, max= 64, avg=64.00, stdev= 0.00, samples=20 00:34:51.419 lat (msec) : 250=49.85%, 500=50.15% 00:34:51.419 cpu : usr=97.21%, sys=1.89%, ctx=15, majf=0, minf=9 00:34:51.419 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename2: (groupid=0, jobs=1): err= 0: pid=1633365: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10132msec) 00:34:51.419 slat (nsec): min=8511, max=69794, avg=25433.17, stdev=8446.59 00:34:51.419 clat (msec): min=130, max=340, avg=246.89, stdev=37.45 00:34:51.419 lat (msec): min=130, max=340, avg=246.92, stdev=37.46 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 131], 5.00th=[ 163], 10.00th=[ 197], 20.00th=[ 243], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 321], 00:34:51.419 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:34:51.419 | 99.99th=[ 342] 00:34:51.419 bw ( KiB/s): min= 256, max= 256, per=3.95%, avg=256.00, stdev= 0.00, samples=20 00:34:51.419 iops : min= 64, max= 64, avg=64.00, stdev= 0.00, samples=20 00:34:51.419 lat (msec) : 250=49.54%, 500=50.46% 00:34:51.419 cpu : usr=98.01%, sys=1.66%, ctx=18, majf=0, minf=9 00:34:51.419 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename2: (groupid=0, jobs=1): err= 0: pid=1633366: Fri Jul 26 18:35:15 2024 00:34:51.419 
read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10132msec) 00:34:51.419 slat (usec): min=17, max=109, avg=66.96, stdev=12.25 00:34:51.419 clat (msec): min=153, max=381, avg=252.48, stdev=31.31 00:34:51.419 lat (msec): min=153, max=381, avg=252.54, stdev=31.31 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 167], 5.00th=[ 171], 10.00th=[ 222], 20.00th=[ 249], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 330], 00:34:51.419 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380], 00:34:51.419 | 99.99th=[ 380] 00:34:51.419 bw ( KiB/s): min= 144, max= 256, per=3.84%, avg=249.60, stdev=25.11, samples=20 00:34:51.419 iops : min= 36, max= 64, avg=62.40, stdev= 6.28, samples=20 00:34:51.419 lat (msec) : 250=40.94%, 500=59.06% 00:34:51.419 cpu : usr=97.89%, sys=1.55%, ctx=36, majf=0, minf=9 00:34:51.419 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:51.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.419 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.419 filename2: (groupid=0, jobs=1): err= 0: pid=1633367: Fri Jul 26 18:35:15 2024 00:34:51.419 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10116msec) 00:34:51.419 slat (nsec): min=6484, max=78274, avg=28509.64, stdev=9786.12 00:34:51.419 clat (msec): min=161, max=348, avg=252.66, stdev=35.19 00:34:51.419 lat (msec): min=161, max=348, avg=252.69, stdev=35.19 00:34:51.419 clat percentiles (msec): 00:34:51.419 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 211], 20.00th=[ 247], 00:34:51.419 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.419 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 334], 00:34:51.419 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:34:51.419 | 99.99th=[ 351] 00:34:51.419 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=249.60, stdev=28.62, samples=20 00:34:51.419 iops : min= 32, max= 64, avg=62.40, stdev= 7.16, samples=20 00:34:51.420 lat (msec) : 250=45.78%, 500=54.22% 00:34:51.420 cpu : usr=97.87%, sys=1.70%, ctx=25, majf=0, minf=9 00:34:51.420 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:51.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.420 filename2: (groupid=0, jobs=1): err= 0: pid=1633368: Fri Jul 26 18:35:15 2024 00:34:51.420 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10123msec) 00:34:51.420 slat (usec): min=5, max=250, avg=35.13, stdev=35.69 00:34:51.420 clat (msec): min=165, max=381, avg=252.57, stdev=29.58 00:34:51.420 lat (msec): min=165, max=381, avg=252.61, stdev=29.57 00:34:51.420 clat percentiles (msec): 00:34:51.420 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 222], 20.00th=[ 249], 00:34:51.420 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 255], 00:34:51.420 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 284], 00:34:51.420 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:34:51.420 | 99.99th=[ 384] 00:34:51.420 bw ( KiB/s): min= 144, max= 272, per=3.84%, 
avg=249.60, stdev=25.64, samples=20 00:34:51.420 iops : min= 36, max= 68, avg=62.40, stdev= 6.41, samples=20 00:34:51.420 lat (msec) : 250=39.53%, 500=60.47% 00:34:51.420 cpu : usr=97.34%, sys=1.75%, ctx=40, majf=0, minf=9 00:34:51.420 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:34:51.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.420 filename2: (groupid=0, jobs=1): err= 0: pid=1633369: Fri Jul 26 18:35:15 2024 00:34:51.420 read: IOPS=66, BW=267KiB/s (274kB/s)(2688KiB/10050msec) 00:34:51.420 slat (usec): min=12, max=160, avg=64.01, stdev=16.76 00:34:51.420 clat (msec): min=57, max=390, avg=238.78, stdev=53.60 00:34:51.420 lat (msec): min=57, max=390, avg=238.84, stdev=53.61 00:34:51.420 clat percentiles (msec): 00:34:51.420 | 1.00th=[ 58], 5.00th=[ 140], 10.00th=[ 159], 20.00th=[ 241], 00:34:51.420 | 30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:34:51.420 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 330], 00:34:51.420 | 99.00th=[ 338], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 393], 00:34:51.420 | 99.99th=[ 393] 00:34:51.420 bw ( KiB/s): min= 128, max= 496, per=4.04%, avg=262.40, stdev=62.16, samples=20 00:34:51.420 iops : min= 32, max= 124, avg=65.60, stdev=15.54, samples=20 00:34:51.420 lat (msec) : 100=4.76%, 250=43.15%, 500=52.08% 00:34:51.420 cpu : usr=97.08%, sys=1.99%, ctx=36, majf=0, minf=9 00:34:51.420 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:34:51.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.420 filename2: (groupid=0, jobs=1): err= 0: pid=1633370: Fri Jul 26 18:35:15 2024 00:34:51.420 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10016msec) 00:34:51.420 slat (nsec): min=9303, max=71072, avg=28911.54, stdev=9996.89 00:34:51.420 clat (msec): min=217, max=396, avg=256.53, stdev=24.92 00:34:51.420 lat (msec): min=217, max=396, avg=256.56, stdev=24.92 00:34:51.420 clat percentiles (msec): 00:34:51.420 | 1.00th=[ 218], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:34:51.420 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:34:51.420 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 279], 00:34:51.420 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:34:51.420 | 99.99th=[ 397] 00:34:51.420 bw ( KiB/s): min= 128, max= 256, per=3.75%, avg=243.20, stdev=39.40, samples=20 00:34:51.420 iops : min= 32, max= 64, avg=60.80, stdev= 9.85, samples=20 00:34:51.420 lat (msec) : 250=39.26%, 500=60.74% 00:34:51.420 cpu : usr=97.90%, sys=1.66%, ctx=34, majf=0, minf=9 00:34:51.420 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:51.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.420 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.420 00:34:51.420 Run status group 0 (all jobs): 
00:34:51.420 READ: bw=6488KiB/s (6644kB/s), 247KiB/s-380KiB/s (253kB/s-389kB/s), io=64.3MiB (67.4MB), run=10016-10150msec 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 bdev_null0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.420 [2024-07-26 18:35:16.099541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.420 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
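For reference, the create_subsystems helper traced above reduces to four RPCs per subsystem: create a null bdev carrying 16 bytes of per-block metadata with DIF type 1 protection, create the NVMe-oF subsystem, attach the bdev as a namespace, and open a TCP listener. A minimal equivalent sequence, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the already-running target:

    # Null bdev of 64 MiB with 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # NVMe-oF subsystem that any host may connect to
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    # Expose the null bdev as a namespace of that subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # TCP listener; the target then logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420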
00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.421 bdev_null1 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:51.421 { 00:34:51.421 "params": { 00:34:51.421 "name": "Nvme$subsystem", 00:34:51.421 "trtype": "$TEST_TRANSPORT", 00:34:51.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.421 "adrfam": "ipv4", 00:34:51.421 "trsvcid": "$NVMF_PORT", 00:34:51.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.421 "hdgst": ${hdgst:-false}, 00:34:51.421 "ddgst": ${ddgst:-false} 00:34:51.421 }, 00:34:51.421 "method": "bdev_nvme_attach_controller" 00:34:51.421 } 00:34:51.421 EOF 00:34:51.421 )") 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
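In the invocation traced above, gen_fio_conf builds the fio job file while gen_nvmf_target_json produces the JSON bdev configuration; the plugin receives them as /dev/fd/62 and /dev/fd/61, which is consistent with bash process substitution handing over anonymous file descriptors instead of files on disk. A condensed sketch of the call (function and helper names as traced; the exact wrapper lives in autotest_common.sh):

    # Both generated documents are passed as anonymous fds, nothing touches disk;
    # the first substitution becomes /dev/fd/62, the second /dev/fd/61.
    fio_bdev --ioengine=spdk_bdev \
             --spdk_json_conf <(gen_nvmf_target_json 0 1) \
             <(gen_fio_conf)

On sanitizer builds the wrapper additionally resolves the ASAN runtime via ldd and prepends it to LD_PRELOAD together with the plugin, as the libasan/libclang_rt.asan probes in the surrounding trace show.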
00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:51.421 { 00:34:51.421 "params": { 00:34:51.421 "name": "Nvme$subsystem", 00:34:51.421 "trtype": "$TEST_TRANSPORT", 00:34:51.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.421 "adrfam": "ipv4", 00:34:51.421 "trsvcid": "$NVMF_PORT", 00:34:51.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.421 "hdgst": ${hdgst:-false}, 00:34:51.421 "ddgst": ${ddgst:-false} 00:34:51.421 }, 00:34:51.421 "method": "bdev_nvme_attach_controller" 00:34:51.421 } 00:34:51.421 EOF 00:34:51.421 )") 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
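The per-subsystem JSON stanzas are accumulated in a bash array from a heredoc template, then comma-joined and pretty-printed with jq (the jq ., IFS=, and printf steps traced at nvmf/common.sh@556-558). A condensed sketch of that pattern; the enclosing document that embeds the joined stanzas in the full bdev-subsystem wrapper is not fully visible in the trace and is elided here:

    config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
        )")
    done
    joined=$(
        IFS=,                          # comma-join the array elements
        printf '%s\n' "${config[*]}"
    )
    jq . <<< "[$joined]"               # normalize; outer wrapper elided

Setting IFS inside the command substitution is what makes "${config[*]}" join with commas, matching the single '{ ... },{ ... },{ ... }' argument printed in the trace above.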
00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:51.421 "params": { 00:34:51.421 "name": "Nvme0", 00:34:51.421 "trtype": "tcp", 00:34:51.421 "traddr": "10.0.0.2", 00:34:51.421 "adrfam": "ipv4", 00:34:51.421 "trsvcid": "4420", 00:34:51.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.421 "hdgst": false, 00:34:51.421 "ddgst": false 00:34:51.421 }, 00:34:51.421 "method": "bdev_nvme_attach_controller" 00:34:51.421 },{ 00:34:51.421 "params": { 00:34:51.421 "name": "Nvme1", 00:34:51.421 "trtype": "tcp", 00:34:51.421 "traddr": "10.0.0.2", 00:34:51.421 "adrfam": "ipv4", 00:34:51.421 "trsvcid": "4420", 00:34:51.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:51.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:51.421 "hdgst": false, 00:34:51.421 "ddgst": false 00:34:51.421 }, 00:34:51.421 "method": "bdev_nvme_attach_controller" 00:34:51.421 }' 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.421 18:35:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.421 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:51.421 ... 00:34:51.421 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:51.421 ... 
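The job layout echoed above follows directly from the parameters set at target/dif.sh@115: bs=8k,16k,128k supplies the distinct (R)ead/(W)rite/(T)rim block sizes, iodepth=8 bounds each job's queue depth, and numjobs=2 across the two file sections (files=1 plus two subsystems gives filename0 and filename1) accounts for the "Starting 4 threads" line that follows. The generated job file would look roughly like this (illustrative only; option names beyond those traced, and the Nvme0n1/Nvme1n1 bdev names, are assumptions):

    gen_fio_conf() {    # sketch; the real generator lives in target/dif.sh
        cat <<FIO
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    FIO
    }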
00:34:51.421 fio-3.35 00:34:51.421 Starting 4 threads 00:34:51.421 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.687 00:34:56.687 filename0: (groupid=0, jobs=1): err= 0: pid=1634751: Fri Jul 26 18:35:22 2024 00:34:56.687 read: IOPS=1949, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5001msec) 00:34:56.687 slat (nsec): min=3782, max=33625, avg=10764.51, stdev=3531.84 00:34:56.687 clat (usec): min=1044, max=10447, avg=4067.69, stdev=566.50 00:34:56.687 lat (usec): min=1058, max=10459, avg=4078.45, stdev=566.42 00:34:56.687 clat percentiles (usec): 00:34:56.687 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3720], 00:34:56.687 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:34:56.687 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5276], 00:34:56.687 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 7242], 99.95th=[ 7504], 00:34:56.687 | 99.99th=[10421] 00:34:56.687 bw ( KiB/s): min=15104, max=15904, per=24.91%, avg=15587.56, stdev=227.37, samples=9 00:34:56.687 iops : min= 1888, max= 1988, avg=1948.44, stdev=28.42, samples=9 00:34:56.687 lat (msec) : 2=0.01%, 4=48.35%, 10=51.63%, 20=0.01% 00:34:56.687 cpu : usr=89.82%, sys=7.60%, ctx=387, majf=0, minf=32 00:34:56.687 IO depths : 1=0.2%, 2=3.6%, 4=69.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 issued rwts: total=9751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.687 filename0: (groupid=0, jobs=1): err= 0: pid=1634752: Fri Jul 26 18:35:22 2024 00:34:56.687 read: IOPS=1931, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5003msec) 00:34:56.687 slat (nsec): min=3675, max=32136, avg=13957.40, stdev=3714.20 00:34:56.687 clat (usec): min=1355, max=10920, avg=4103.54, stdev=630.75 00:34:56.687 lat (usec): min=1372, max=10936, avg=4117.50, stdev=630.69 00:34:56.687 clat percentiles (usec): 00:34:56.687 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3720], 00:34:56.687 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:34:56.687 | 70.00th=[ 4080], 80.00th=[ 4359], 90.00th=[ 4948], 95.00th=[ 5538], 00:34:56.687 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 9634], 00:34:56.687 | 99.99th=[10945] 00:34:56.687 bw ( KiB/s): min=14880, max=15952, per=24.68%, avg=15444.80, stdev=333.07, samples=10 00:34:56.687 iops : min= 1860, max= 1994, avg=1930.60, stdev=41.63, samples=10 00:34:56.687 lat (msec) : 2=0.01%, 4=48.88%, 10=51.10%, 20=0.01% 00:34:56.687 cpu : usr=93.38%, sys=6.06%, ctx=6, majf=0, minf=19 00:34:56.687 IO depths : 1=0.1%, 2=1.4%, 4=68.3%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 issued rwts: total=9661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.687 filename1: (groupid=0, jobs=1): err= 0: pid=1634753: Fri Jul 26 18:35:22 2024 00:34:56.687 read: IOPS=1999, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5003msec) 00:34:56.687 slat (usec): min=3, max=314, avg=12.77, stdev= 5.59 00:34:56.687 clat (usec): min=1318, max=9342, avg=3960.46, stdev=591.72 00:34:56.687 lat (usec): min=1327, max=9354, avg=3973.23, stdev=591.71 00:34:56.687 clat percentiles (usec): 00:34:56.687 | 1.00th=[ 2671], 
5.00th=[ 3032], 10.00th=[ 3294], 20.00th=[ 3589], 00:34:56.687 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3982], 60.00th=[ 4015], 00:34:56.687 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5014], 00:34:56.687 | 99.00th=[ 5866], 99.50th=[ 6194], 99.90th=[ 7177], 99.95th=[ 9110], 00:34:56.687 | 99.99th=[ 9372] 00:34:56.687 bw ( KiB/s): min=15344, max=16400, per=25.56%, avg=15993.60, stdev=360.08, samples=10 00:34:56.687 iops : min= 1918, max= 2050, avg=1999.20, stdev=45.01, samples=10 00:34:56.687 lat (msec) : 2=0.02%, 4=54.73%, 10=45.25% 00:34:56.687 cpu : usr=92.46%, sys=6.34%, ctx=15, majf=0, minf=54 00:34:56.687 IO depths : 1=0.2%, 2=3.5%, 4=68.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 issued rwts: total=10004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.687 filename1: (groupid=0, jobs=1): err= 0: pid=1634754: Fri Jul 26 18:35:22 2024 00:34:56.687 read: IOPS=1942, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5001msec) 00:34:56.687 slat (nsec): min=3697, max=37399, avg=11006.24, stdev=3293.66 00:34:56.687 clat (usec): min=1322, max=8054, avg=4084.10, stdev=620.85 00:34:56.687 lat (usec): min=1335, max=8065, avg=4095.11, stdev=620.66 00:34:56.687 clat percentiles (usec): 00:34:56.687 | 1.00th=[ 2868], 5.00th=[ 3294], 10.00th=[ 3490], 20.00th=[ 3720], 00:34:56.687 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:34:56.687 | 70.00th=[ 4080], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5473], 00:34:56.687 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 7111], 99.95th=[ 7832], 00:34:56.687 | 99.99th=[ 8029] 00:34:56.687 bw ( KiB/s): min=15184, max=16048, per=24.84%, avg=15541.33, stdev=308.29, samples=9 00:34:56.687 iops : min= 1898, max= 2006, avg=1942.67, stdev=38.54, samples=9 00:34:56.687 lat (msec) : 2=0.02%, 4=47.17%, 10=52.81% 00:34:56.687 cpu : usr=94.28%, sys=5.22%, ctx=17, majf=0, minf=55 00:34:56.687 IO depths : 1=0.2%, 2=3.4%, 4=67.9%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.687 issued rwts: total=9716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.687 00:34:56.687 Run status group 0 (all jobs): 00:34:56.687 READ: bw=61.1MiB/s (64.1MB/s), 15.1MiB/s-15.6MiB/s (15.8MB/s-16.4MB/s), io=306MiB (321MB), run=5001-5003msec 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.687 00:34:56.687 real 0m24.262s 00:34:56.687 user 4m32.903s 00:34:56.687 sys 0m7.689s 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:56.687 18:35:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.687 ************************************ 00:34:56.687 END TEST fio_dif_rand_params 00:34:56.687 ************************************ 00:34:56.688 18:35:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:56.688 18:35:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:56.688 18:35:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:56.688 18:35:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 ************************************ 00:34:56.688 START TEST fio_dif_digest 00:34:56.688 ************************************ 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 bdev_null0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 [2024-07-26 18:35:22.617891] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:56.688 { 00:34:56.688 "params": { 00:34:56.688 "name": "Nvme$subsystem", 00:34:56.688 "trtype": "$TEST_TRANSPORT", 00:34:56.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.688 "adrfam": "ipv4", 00:34:56.688 "trsvcid": "$NVMF_PORT", 00:34:56.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.688 "hdgst": ${hdgst:-false}, 00:34:56.688 "ddgst": ${ddgst:-false} 00:34:56.688 }, 00:34:56.688 "method": 
"bdev_nvme_attach_controller" 00:34:56.688 } 00:34:56.688 EOF 00:34:56.688 )") 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:56.688 "params": { 00:34:56.688 "name": "Nvme0", 00:34:56.688 "trtype": "tcp", 00:34:56.688 "traddr": "10.0.0.2", 00:34:56.688 "adrfam": "ipv4", 00:34:56.688 "trsvcid": "4420", 00:34:56.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:56.688 "hdgst": true, 00:34:56.688 "ddgst": true 00:34:56.688 }, 00:34:56.688 "method": "bdev_nvme_attach_controller" 00:34:56.688 }' 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:56.688 18:35:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.947 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:56.947 ... 
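For reference, the job banner above decodes to a plain fio job file. The reconstruction below uses the dif.sh digest parameters traced earlier (bs=128k, numjobs=3, iodepth=3, runtime=10) and is a sketch only: the bdev name and the time_based flag are assumptions, not values shown in this log.

  gen_fio_conf() {   # sketch only; the real helper lives in target/dif.sh
  cat <<EOF
  [global]
  thread=1
  [filename0]
  rw=randread
  bs=128k
  numjobs=3
  iodepth=3
  time_based=1
  runtime=10
  filename=Nvme0n1
  EOF
  }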
00:34:56.947 fio-3.35 00:34:56.947 Starting 3 threads 00:34:56.947 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.161 00:35:09.161 filename0: (groupid=0, jobs=1): err= 0: pid=1635625: Fri Jul 26 18:35:33 2024 00:35:09.161 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(268MiB/10047msec) 00:35:09.161 slat (nsec): min=4435, max=51671, avg=14427.27, stdev=2217.88 00:35:09.161 clat (usec): min=8513, max=56563, avg=13999.88, stdev=3155.80 00:35:09.161 lat (usec): min=8530, max=56577, avg=14014.31, stdev=3155.79 00:35:09.161 clat percentiles (usec): 00:35:09.161 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[12387], 20.00th=[13042], 00:35:09.161 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:35:09.161 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15139], 95.00th=[15664], 00:35:09.161 | 99.00th=[17171], 99.50th=[47449], 99.90th=[56361], 99.95th=[56361], 00:35:09.161 | 99.99th=[56361] 00:35:09.161 bw ( KiB/s): min=24576, max=29184, per=34.49%, avg=27456.00, stdev=1123.20, samples=20 00:35:09.161 iops : min= 192, max= 228, avg=214.50, stdev= 8.77, samples=20 00:35:09.161 lat (msec) : 10=1.54%, 20=97.90%, 50=0.09%, 100=0.47% 00:35:09.161 cpu : usr=92.03%, sys=7.44%, ctx=25, majf=0, minf=160 00:35:09.161 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:09.162 filename0: (groupid=0, jobs=1): err= 0: pid=1635626: Fri Jul 26 18:35:33 2024 00:35:09.162 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(252MiB/10046msec) 00:35:09.162 slat (nsec): min=4948, max=32655, avg=15088.86, stdev=2321.92 00:35:09.162 clat (usec): min=8935, max=58304, avg=14938.42, stdev=3293.79 00:35:09.162 lat (usec): min=8949, max=58319, avg=14953.50, stdev=3293.76 00:35:09.162 clat percentiles (usec): 00:35:09.162 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13304], 20.00th=[13829], 00:35:09.162 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:35:09.162 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:35:09.162 | 99.00th=[17695], 99.50th=[49021], 99.90th=[57410], 99.95th=[57934], 00:35:09.162 | 99.99th=[58459] 00:35:09.162 bw ( KiB/s): min=23296, max=27648, per=32.32%, avg=25730.50, stdev=1115.03, samples=20 00:35:09.162 iops : min= 182, max= 216, avg=201.00, stdev= 8.72, samples=20 00:35:09.162 lat (msec) : 10=1.14%, 20=98.26%, 50=0.10%, 100=0.50% 00:35:09.162 cpu : usr=91.63%, sys=7.41%, ctx=136, majf=0, minf=121 00:35:09.162 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:09.162 filename0: (groupid=0, jobs=1): err= 0: pid=1635627: Fri Jul 26 18:35:33 2024 00:35:09.162 read: IOPS=207, BW=26.0MiB/s (27.3MB/s)(261MiB/10045msec) 00:35:09.162 slat (nsec): min=4911, max=50315, avg=17005.41, stdev=2801.23 00:35:09.162 clat (usec): min=8080, max=56454, avg=14382.74, stdev=2784.78 00:35:09.162 lat (usec): min=8094, max=56469, avg=14399.75, stdev=2784.85 00:35:09.162 clat percentiles (usec): 
00:35:09.162 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12911], 20.00th=[13435], 00:35:09.162 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:35:09.162 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:35:09.162 | 99.00th=[17171], 99.50th=[18220], 99.90th=[55837], 99.95th=[56361], 00:35:09.162 | 99.99th=[56361] 00:35:09.162 bw ( KiB/s): min=25088, max=27904, per=33.56%, avg=26713.60, stdev=720.62, samples=20 00:35:09.162 iops : min= 196, max= 218, avg=208.70, stdev= 5.63, samples=20 00:35:09.162 lat (msec) : 10=1.29%, 20=98.32%, 50=0.05%, 100=0.34% 00:35:09.162 cpu : usr=91.65%, sys=7.79%, ctx=15, majf=0, minf=188 00:35:09.162 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 issued rwts: total=2089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:09.162 00:35:09.162 Run status group 0 (all jobs): 00:35:09.162 READ: bw=77.7MiB/s (81.5MB/s), 25.0MiB/s-26.7MiB/s (26.2MB/s-28.0MB/s), io=781MiB (819MB), run=10045-10047msec 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.162 00:35:09.162 real 0m11.080s 00:35:09.162 user 0m28.778s 00:35:09.162 sys 0m2.531s 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:09.162 18:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 ************************************ 00:35:09.162 END TEST fio_dif_digest 00:35:09.162 ************************************ 00:35:09.162 18:35:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:09.162 18:35:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:09.162 rmmod nvme_tcp 00:35:09.162 rmmod 
nvme_fabrics 00:35:09.162 rmmod nvme_keyring 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1629489 ']' 00:35:09.162 18:35:33 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1629489 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1629489 ']' 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1629489 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1629489 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1629489' 00:35:09.162 killing process with pid 1629489 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1629489 00:35:09.162 18:35:33 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1629489 00:35:09.162 18:35:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:09.162 18:35:34 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:09.162 Waiting for block devices as requested 00:35:09.162 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:09.162 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:09.422 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.422 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.422 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:09.681 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:09.681 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:09.681 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:09.681 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:09.941 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:09.941 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.941 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.941 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:10.201 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:10.201 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:10.201 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:10.201 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:10.459 18:35:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:10.459 18:35:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:10.459 18:35:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:10.459 18:35:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:10.459 18:35:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.459 18:35:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:10.459 18:35:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.359 18:35:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:12.359 00:35:12.359 real 1m6.379s 00:35:12.359 user 6m28.521s 00:35:12.359 sys 0m19.488s 00:35:12.359 18:35:38 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:12.359 18:35:38 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:35:12.359 ************************************ 00:35:12.359 END TEST nvmf_dif 00:35:12.359 ************************************ 00:35:12.617 18:35:38 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:12.617 18:35:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:12.617 18:35:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:12.617 18:35:38 -- common/autotest_common.sh@10 -- # set +x 00:35:12.617 ************************************ 00:35:12.617 START TEST nvmf_abort_qd_sizes 00:35:12.617 ************************************ 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:12.617 * Looking for test storage... 00:35:12.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:12.617 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.618 18:35:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:12.618 18:35:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:14.519 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.519 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:14.519 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:14.519 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:14.520 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:14.520 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:14.520 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:14.520 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
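Before any addressing happens, the device scan above boils down to one sysfs lookup per PCI function: the net devices bound to a NIC are listed under its PCI node. A standalone sketch of the mapping the trace performs for the two E810 ports found in this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done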
00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:14.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:14.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:35:14.520 00:35:14.520 --- 10.0.0.2 ping statistics --- 00:35:14.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.520 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:14.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:14.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:35:14.520 00:35:14.520 --- 10.0.0.1 ping statistics --- 00:35:14.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.520 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:14.520 18:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:15.452 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:15.710 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:15.710 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:16.647 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1640414 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1640414 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1640414 ']' 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:16.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:16.648 18:35:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.908 [2024-07-26 18:35:42.831761] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:16.908 [2024-07-26 18:35:42.831852] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.908 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.908 [2024-07-26 18:35:42.868873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:16.908 [2024-07-26 18:35:42.901309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:16.908 [2024-07-26 18:35:42.990534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.908 [2024-07-26 18:35:42.990586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.908 [2024-07-26 18:35:42.990605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.908 [2024-07-26 18:35:42.990616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.908 [2024-07-26 18:35:42.990625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.908 [2024-07-26 18:35:42.990705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.908 [2024-07-26 18:35:42.990735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.908 [2024-07-26 18:35:42.990793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:16.908 [2024-07-26 18:35:42.990795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:17.168 18:35:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:17.168 ************************************ 00:35:17.168 START TEST spdk_target_abort 00:35:17.168 ************************************ 00:35:17.168 18:35:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:17.168 18:35:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:17.168 18:35:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:17.168 18:35:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.168 18:35:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.489 spdk_targetn1 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.489 [2024-07-26 18:35:46.020203] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.489 [2024-07-26 18:35:46.052446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.489 18:35:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.489 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.778 Initializing NVMe Controllers 00:35:23.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:23.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:23.778 Initialization complete. Launching workers. 00:35:23.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10817, failed: 0 00:35:23.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1257, failed to submit 9560 00:35:23.778 success 775, unsuccess 482, failed 0 00:35:23.778 18:35:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:23.779 18:35:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:23.779 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.065 Initializing NVMe Controllers 00:35:27.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.065 Initialization complete. Launching workers. 00:35:27.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8670, failed: 0 00:35:27.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 7448 00:35:27.065 success 348, unsuccess 874, failed 0 00:35:27.065 18:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.065 18:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.065 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.595 Initializing NVMe Controllers 00:35:29.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:29.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:29.595 Initialization complete. Launching workers. 
00:35:29.595 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31311, failed: 0 00:35:29.595 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2716, failed to submit 28595 00:35:29.595 success 563, unsuccess 2153, failed 0 00:35:29.595 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:29.595 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.595 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.595 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.596 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:29.596 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.596 18:35:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1640414 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1640414 ']' 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1640414 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:30.972 18:35:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640414 00:35:30.972 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:30.972 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:30.972 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640414' 00:35:30.972 killing process with pid 1640414 00:35:30.972 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1640414 00:35:30.972 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1640414 00:35:31.231 00:35:31.231 real 0m14.064s 00:35:31.231 user 0m53.043s 00:35:31.231 sys 0m2.749s 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:31.231 ************************************ 00:35:31.231 END TEST spdk_target_abort 00:35:31.231 ************************************ 00:35:31.231 18:35:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:31.231 18:35:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:31.231 18:35:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:31.231 18:35:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:31.231 ************************************ 00:35:31.231 START TEST kernel_target_abort 00:35:31.231 
************************************ 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:31.231 18:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:32.607 Waiting for block devices as requested 00:35:32.607 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:32.607 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:32.607 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:32.607 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:32.866 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:32.866 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:32.866 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:32.866 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:32.866 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:33.125 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:33.125 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:33.125 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:33.385 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:33.385 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:33.385 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:33.385 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:33.643 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:33.643 No valid GPT data, bailing 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:33.643 18:35:59 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:33.643 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:33.902 00:35:33.902 Discovery Log Number of Records 2, Generation counter 2 00:35:33.902 =====Discovery Log Entry 0====== 00:35:33.902 trtype: tcp 00:35:33.902 adrfam: ipv4 00:35:33.902 subtype: current discovery subsystem 00:35:33.902 treq: not specified, sq flow control disable supported 00:35:33.902 portid: 1 00:35:33.902 trsvcid: 4420 00:35:33.902 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:33.902 traddr: 10.0.0.1 00:35:33.902 eflags: none 00:35:33.902 sectype: none 00:35:33.902 =====Discovery Log Entry 1====== 00:35:33.902 trtype: tcp 00:35:33.902 adrfam: ipv4 00:35:33.902 subtype: nvme subsystem 00:35:33.902 treq: not specified, sq flow control disable supported 00:35:33.902 portid: 1 00:35:33.902 trsvcid: 4420 00:35:33.902 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:33.902 traddr: 10.0.0.1 00:35:33.902 eflags: none 00:35:33.902 sectype: none 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:33.902 18:35:59 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:33.902 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.903 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:33.903 18:35:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.903 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.184 Initializing NVMe Controllers 00:35:37.184 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:37.184 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:37.184 Initialization complete. Launching workers. 00:35:37.184 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30054, failed: 0 00:35:37.184 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30054, failed to submit 0 00:35:37.184 success 0, unsuccess 30054, failed 0 00:35:37.184 18:36:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:37.185 18:36:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:37.185 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.469 Initializing NVMe Controllers 00:35:40.469 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:40.469 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:40.469 Initialization complete. Launching workers. 
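The kernel_target_abort setup traced above (nvmf/common.sh@658 through @677) builds its target by writing Linux nvmet configfs entries directly. xtrace does not show redirection targets, so the attribute file names in this sketch are the standard nvmet configfs ones, assumed rather than read from the log; the 'echo SPDK-nqn...' at @665 sets the subsystem's identify string and is omitted here:

#!/usr/bin/env bash
# Export a local NVMe namespace as an NVMe/TCP kernel target via configfs.
# The trace shows only 'modprobe nvmet'; nvmet_tcp is loaded explicitly here,
# consistent with the teardown later removing both modules.
modprobe nvmet
modprobe nvmet_tcp
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
# linking the subsystem under the port starts listening on 10.0.0.1:4420
ln -s "$sub" "$port/subsystems/"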
00:35:40.469 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60268, failed: 0 00:35:40.469 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15182, failed to submit 45086 00:35:40.469 success 0, unsuccess 15182, failed 0 00:35:40.469 18:36:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:40.469 18:36:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.469 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.750 Initializing NVMe Controllers 00:35:43.750 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.750 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:43.750 Initialization complete. Launching workers. 00:35:43.750 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59293, failed: 0 00:35:43.750 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14790, failed to submit 44503 00:35:43.750 success 0, unsuccess 14790, failed 0 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:43.750 18:36:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:44.347 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:44.347 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:44.347 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:44.347 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:44.347 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:44.347 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:44.347 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:44.347 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:35:44.606 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:44.606 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:45.543 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:45.543 00:35:45.543 real 0m14.227s 00:35:45.543 user 0m4.877s 00:35:45.543 sys 0m3.348s 00:35:45.543 18:36:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:45.543 18:36:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.543 ************************************ 00:35:45.543 END TEST kernel_target_abort 00:35:45.543 ************************************ 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:45.543 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:45.544 rmmod nvme_tcp 00:35:45.544 rmmod nvme_fabrics 00:35:45.544 rmmod nvme_keyring 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1640414 ']' 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1640414 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1640414 ']' 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1640414 00:35:45.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1640414) - No such process 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1640414 is not found' 00:35:45.544 Process with pid 1640414 is not found 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:45.544 18:36:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:46.475 Waiting for block devices as requested 00:35:46.735 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:46.735 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:46.735 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:46.996 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:46.996 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:46.996 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:47.256 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:47.256 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:47.256 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:47.256 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:47.514 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:47.514 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:47.514 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:47.514 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:47.514 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:47.771 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:35:47.771 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:47.771 18:36:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.301 18:36:15 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:50.301 00:35:50.301 real 0m37.386s 00:35:50.301 user 0m59.934s 00:35:50.301 sys 0m9.269s 00:35:50.301 18:36:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.301 18:36:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:50.301 ************************************ 00:35:50.301 END TEST nvmf_abort_qd_sizes 00:35:50.301 ************************************ 00:35:50.301 18:36:15 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:50.301 18:36:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:50.301 18:36:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.301 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:35:50.301 ************************************ 00:35:50.301 START TEST keyring_file 00:35:50.302 ************************************ 00:35:50.302 18:36:15 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:50.302 * Looking for test storage... 
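The keyring_file suite that follows ultimately exercises two RPCs against the bdevperf socket: registering PSK files as named keys, then attaching a TLS-protected controller through one of them. Distilled from the calls traced below (socket path, key files, and NQNs as in this run):

#!/usr/bin/env bash
RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
# register the two PSK interchange files generated by prep_key below
$RPC keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS
$RPC keyring_file_add_key key1 /tmp/tmp.djTHBRY4sY
# attach an NVMe/TCP controller whose TLS channel is keyed by key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0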
00:35:50.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.302 18:36:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.302 18:36:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.302 18:36:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.302 18:36:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.302 18:36:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.302 18:36:16 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.302 18:36:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:50.302 18:36:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qVuDL6M7tS 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:50.302 18:36:16 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qVuDL6M7tS 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qVuDL6M7tS 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qVuDL6M7tS 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.djTHBRY4sY 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:50.302 18:36:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.djTHBRY4sY 00:35:50.302 18:36:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.djTHBRY4sY 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.djTHBRY4sY 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=1646786 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:50.302 18:36:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1646786 00:35:50.302 18:36:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1646786 ']' 00:35:50.302 18:36:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.302 18:36:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:50.302 18:36:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.302 18:36:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:50.302 18:36:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:50.302 [2024-07-26 18:36:16.165441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:50.302 [2024-07-26 18:36:16.165534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646786 ] 00:35:50.302 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.302 [2024-07-26 18:36:16.197399] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
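prep_key, traced above, wraps each raw hex key in the NVMe TLS PSK interchange format via an inline python snippet and then locks the file to 0600. A sketch of what that snippet emits; the payload layout (configured PSK followed by its CRC-32, little endian) and the '00' no-hash digest indicator are assumptions about the interchange format, not something visible in this trace:

#!/usr/bin/env bash
# Produce a PSK interchange file the way prep_key's mktemp + format_interchange_psk does.
key=00112233445566778899aabbccddeeff    # key0 hex from file.sh@15
path=$(mktemp)
python3 - "$key" > "$path" <<'EOF'
import base64, binascii, sys
k = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(k).to_bytes(4, "little")   # assumed: 4-byte CRC-32 trailer, LE
print(f"NVMeTLSkey-1:00:{base64.b64encode(k + crc).decode()}:")  # assumed: 00 = unhashed PSK
EOF
chmod 0600 "$path"    # looser modes are rejected, as the 0660 negative test later shows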
00:35:50.302 [2024-07-26 18:36:16.224884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.302 [2024-07-26 18:36:16.308674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.560 18:36:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:50.560 18:36:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:50.560 18:36:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:50.560 18:36:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.560 18:36:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:50.560 [2024-07-26 18:36:16.548249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.560 null0 00:35:50.560 [2024-07-26 18:36:16.580341] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:50.560 [2024-07-26 18:36:16.580676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:50.560 [2024-07-26 18:36:16.588310] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:50.560 18:36:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.560 18:36:16 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:50.560 18:36:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:50.561 [2024-07-26 18:36:16.600350] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:50.561 request: 00:35:50.561 { 00:35:50.561 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.561 "secure_channel": false, 00:35:50.561 "listen_address": { 00:35:50.561 "trtype": "tcp", 00:35:50.561 "traddr": "127.0.0.1", 00:35:50.561 "trsvcid": "4420" 00:35:50.561 }, 00:35:50.561 "method": "nvmf_subsystem_add_listener", 00:35:50.561 "req_id": 1 00:35:50.561 } 00:35:50.561 Got JSON-RPC error response 00:35:50.561 response: 00:35:50.561 { 00:35:50.561 "code": -32602, 00:35:50.561 "message": "Invalid parameters" 00:35:50.561 } 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.561 18:36:16 keyring_file -- keyring/file.sh@46 -- # bperfpid=1646791 00:35:50.561 18:36:16 
keyring_file -- keyring/file.sh@48 -- # waitforlisten 1646791 /var/tmp/bperf.sock 00:35:50.561 18:36:16 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1646791 ']' 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:50.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:50.561 18:36:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:50.561 [2024-07-26 18:36:16.649125] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:50.561 [2024-07-26 18:36:16.649194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646791 ] 00:35:50.561 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.561 [2024-07-26 18:36:16.680486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:50.818 [2024-07-26 18:36:16.710485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.818 [2024-07-26 18:36:16.800945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:50.818 18:36:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:50.818 18:36:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:50.818 18:36:16 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:50.818 18:36:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:51.076 18:36:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.djTHBRY4sY 00:35:51.076 18:36:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.djTHBRY4sY 00:35:51.334 18:36:17 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:51.334 18:36:17 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:51.334 18:36:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.334 18:36:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.334 18:36:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.592 18:36:17 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.qVuDL6M7tS == \/\t\m\p\/\t\m\p\.\q\V\u\D\L\6\M\7\t\S ]] 00:35:51.592 18:36:17 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:51.592 18:36:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:51.592 18:36:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:35:51.592 18:36:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.592 18:36:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:51.850 18:36:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.djTHBRY4sY == \/\t\m\p\/\t\m\p\.\d\j\T\H\B\R\Y\4\s\Y ]] 00:35:51.850 18:36:17 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:51.850 18:36:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:51.850 18:36:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:51.850 18:36:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.850 18:36:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.850 18:36:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.108 18:36:18 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:52.108 18:36:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:52.108 18:36:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:52.108 18:36:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.108 18:36:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.108 18:36:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.108 18:36:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:52.366 18:36:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:52.366 18:36:18 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.366 18:36:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.625 [2024-07-26 18:36:18.673516] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:52.625 nvme0n1 00:35:52.625 18:36:18 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:52.625 18:36:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.625 18:36:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.883 18:36:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.883 18:36:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.883 18:36:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.883 18:36:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:52.883 18:36:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:52.883 18:36:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:52.883 18:36:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.883 18:36:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.883 18:36:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:35:52.883 18:36:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:53.142 18:36:19 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:53.142 18:36:19 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.399 Running I/O for 1 seconds... 00:35:54.336 00:35:54.336 Latency(us) 00:35:54.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.336 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:54.336 nvme0n1 : 1.02 4375.80 17.09 0.00 0.00 28916.32 4174.89 33204.91 00:35:54.336 =================================================================================================================== 00:35:54.336 Total : 4375.80 17.09 0.00 0.00 28916.32 4174.89 33204.91 00:35:54.336 0 00:35:54.336 18:36:20 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:54.336 18:36:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:54.594 18:36:20 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:35:54.594 18:36:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:54.594 18:36:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.594 18:36:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.594 18:36:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.594 18:36:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:54.852 18:36:20 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:54.852 18:36:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:35:54.852 18:36:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:54.852 18:36:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.852 18:36:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.852 18:36:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.852 18:36:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.110 18:36:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:55.110 18:36:21 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:55.110 18:36:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:55.110 18:36:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:55.110 18:36:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:55.110 18:36:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:55.110 18:36:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:55.110 18:36:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:55.110 18:36:21 keyring_file -- 
common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:55.110 18:36:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:55.368 [2024-07-26 18:36:21.389265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:55.368 [2024-07-26 18:36:21.390116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5a7b0 (107): Transport endpoint is not connected 00:35:55.368 [2024-07-26 18:36:21.391107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5a7b0 (9): Bad file descriptor 00:35:55.368 [2024-07-26 18:36:21.392091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:55.368 [2024-07-26 18:36:21.392129] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:55.368 [2024-07-26 18:36:21.392142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:55.368 request: 00:35:55.368 { 00:35:55.368 "name": "nvme0", 00:35:55.368 "trtype": "tcp", 00:35:55.368 "traddr": "127.0.0.1", 00:35:55.368 "adrfam": "ipv4", 00:35:55.368 "trsvcid": "4420", 00:35:55.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.368 "prchk_reftag": false, 00:35:55.368 "prchk_guard": false, 00:35:55.368 "hdgst": false, 00:35:55.368 "ddgst": false, 00:35:55.368 "psk": "key1", 00:35:55.368 "method": "bdev_nvme_attach_controller", 00:35:55.368 "req_id": 1 00:35:55.368 } 00:35:55.368 Got JSON-RPC error response 00:35:55.368 response: 00:35:55.368 { 00:35:55.368 "code": -5, 00:35:55.368 "message": "Input/output error" 00:35:55.368 } 00:35:55.368 18:36:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:55.368 18:36:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:55.368 18:36:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:55.368 18:36:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:55.368 18:36:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:55.368 18:36:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.368 18:36:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.368 18:36:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.368 18:36:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.368 18:36:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.627 18:36:21 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:55.627 18:36:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:55.627 18:36:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:55.627 18:36:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.627 18:36:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.627 18:36:21 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.627 18:36:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.885 18:36:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:55.885 18:36:21 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:55.885 18:36:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:56.143 18:36:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:56.143 18:36:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:56.401 18:36:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:56.401 18:36:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.401 18:36:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:56.659 18:36:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:56.659 18:36:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.qVuDL6M7tS 00:35:56.659 18:36:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:56.659 18:36:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:56.659 18:36:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:56.917 [2024-07-26 18:36:22.881011] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qVuDL6M7tS': 0100660 00:35:56.917 [2024-07-26 18:36:22.881083] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:56.917 request: 00:35:56.917 { 00:35:56.917 "name": "key0", 00:35:56.917 "path": "/tmp/tmp.qVuDL6M7tS", 00:35:56.917 "method": "keyring_file_add_key", 00:35:56.917 "req_id": 1 00:35:56.917 } 00:35:56.917 Got JSON-RPC error response 00:35:56.917 response: 00:35:56.917 { 00:35:56.917 "code": -1, 00:35:56.917 "message": "Operation not permitted" 00:35:56.917 } 00:35:56.917 18:36:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:56.917 18:36:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:56.917 18:36:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:56.917 18:36:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:56.917 18:36:22 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.qVuDL6M7tS 00:35:56.917 18:36:22 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:56.917 18:36:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qVuDL6M7tS 00:35:57.175 18:36:23 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.qVuDL6M7tS 00:35:57.175 18:36:23 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:57.175 18:36:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:57.175 18:36:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:57.175 18:36:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:57.175 18:36:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:57.175 18:36:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:57.432 18:36:23 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:57.432 18:36:23 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.432 18:36:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:57.432 18:36:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:57.689 [2024-07-26 18:36:23.619040] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qVuDL6M7tS': No such file or directory 00:35:57.689 [2024-07-26 18:36:23.619110] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:57.689 [2024-07-26 18:36:23.619146] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:57.689 [2024-07-26 18:36:23.619156] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:57.689 [2024-07-26 18:36:23.619168] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:57.689 request: 00:35:57.689 { 00:35:57.689 "name": "nvme0", 00:35:57.689 "trtype": "tcp", 00:35:57.689 "traddr": "127.0.0.1", 00:35:57.689 "adrfam": "ipv4", 00:35:57.689 "trsvcid": "4420", 00:35:57.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.689 "prchk_reftag": false, 00:35:57.689 
"prchk_guard": false, 00:35:57.689 "hdgst": false, 00:35:57.689 "ddgst": false, 00:35:57.689 "psk": "key0", 00:35:57.689 "method": "bdev_nvme_attach_controller", 00:35:57.689 "req_id": 1 00:35:57.689 } 00:35:57.689 Got JSON-RPC error response 00:35:57.689 response: 00:35:57.689 { 00:35:57.689 "code": -19, 00:35:57.689 "message": "No such device" 00:35:57.689 } 00:35:57.689 18:36:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:57.689 18:36:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:57.689 18:36:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:57.689 18:36:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:57.689 18:36:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:57.689 18:36:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:57.946 18:36:23 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HebdwP7Ha1 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:57.946 18:36:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:57.946 18:36:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:57.946 18:36:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:57.946 18:36:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:57.946 18:36:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:57.946 18:36:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HebdwP7Ha1 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HebdwP7Ha1 00:35:57.946 18:36:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.HebdwP7Ha1 00:35:57.946 18:36:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HebdwP7Ha1 00:35:57.946 18:36:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HebdwP7Ha1 00:35:58.203 18:36:24 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:58.203 18:36:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:58.461 nvme0n1 00:35:58.461 18:36:24 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:35:58.461 18:36:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:58.461 18:36:24 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.461 18:36:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.461 18:36:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.461 18:36:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:58.718 18:36:24 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:58.718 18:36:24 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:58.718 18:36:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:58.976 18:36:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:58.976 18:36:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:58.976 18:36:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.976 18:36:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.976 18:36:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:59.234 18:36:25 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:59.234 18:36:25 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:59.234 18:36:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:59.234 18:36:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.234 18:36:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.234 18:36:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.234 18:36:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:59.492 18:36:25 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:59.492 18:36:25 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:59.492 18:36:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:59.750 18:36:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:59.750 18:36:25 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:59.750 18:36:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.008 18:36:25 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:00.008 18:36:25 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HebdwP7Ha1 00:36:00.008 18:36:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HebdwP7Ha1 00:36:00.274 18:36:26 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.djTHBRY4sY 00:36:00.274 18:36:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.djTHBRY4sY 00:36:00.575 18:36:26 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:36:00.575 18:36:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:00.833 nvme0n1 00:36:00.833 18:36:26 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:00.833 18:36:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:01.092 18:36:27 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:01.092 "subsystems": [ 00:36:01.092 { 00:36:01.092 "subsystem": "keyring", 00:36:01.092 "config": [ 00:36:01.092 { 00:36:01.092 "method": "keyring_file_add_key", 00:36:01.092 "params": { 00:36:01.092 "name": "key0", 00:36:01.092 "path": "/tmp/tmp.HebdwP7Ha1" 00:36:01.092 } 00:36:01.092 }, 00:36:01.092 { 00:36:01.092 "method": "keyring_file_add_key", 00:36:01.092 "params": { 00:36:01.092 "name": "key1", 00:36:01.092 "path": "/tmp/tmp.djTHBRY4sY" 00:36:01.092 } 00:36:01.092 } 00:36:01.092 ] 00:36:01.092 }, 00:36:01.092 { 00:36:01.092 "subsystem": "iobuf", 00:36:01.092 "config": [ 00:36:01.092 { 00:36:01.092 "method": "iobuf_set_options", 00:36:01.092 "params": { 00:36:01.092 "small_pool_count": 8192, 00:36:01.092 "large_pool_count": 1024, 00:36:01.092 "small_bufsize": 8192, 00:36:01.092 "large_bufsize": 135168 00:36:01.092 } 00:36:01.092 } 00:36:01.092 ] 00:36:01.092 }, 00:36:01.092 { 00:36:01.092 "subsystem": "sock", 00:36:01.092 "config": [ 00:36:01.092 { 00:36:01.092 "method": "sock_set_default_impl", 00:36:01.092 "params": { 00:36:01.092 "impl_name": "posix" 00:36:01.092 } 00:36:01.092 }, 00:36:01.092 { 00:36:01.092 "method": "sock_impl_set_options", 00:36:01.092 "params": { 00:36:01.092 "impl_name": "ssl", 00:36:01.092 "recv_buf_size": 4096, 00:36:01.092 "send_buf_size": 4096, 00:36:01.093 "enable_recv_pipe": true, 00:36:01.093 "enable_quickack": false, 00:36:01.093 "enable_placement_id": 0, 00:36:01.093 "enable_zerocopy_send_server": true, 00:36:01.093 "enable_zerocopy_send_client": false, 00:36:01.093 "zerocopy_threshold": 0, 00:36:01.093 "tls_version": 0, 00:36:01.093 "enable_ktls": false 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "sock_impl_set_options", 00:36:01.093 "params": { 00:36:01.093 "impl_name": "posix", 00:36:01.093 "recv_buf_size": 2097152, 00:36:01.093 "send_buf_size": 2097152, 00:36:01.093 "enable_recv_pipe": true, 00:36:01.093 "enable_quickack": false, 00:36:01.093 "enable_placement_id": 0, 00:36:01.093 "enable_zerocopy_send_server": true, 00:36:01.093 "enable_zerocopy_send_client": false, 00:36:01.093 "zerocopy_threshold": 0, 00:36:01.093 "tls_version": 0, 00:36:01.093 "enable_ktls": false 00:36:01.093 } 00:36:01.093 } 00:36:01.093 ] 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "subsystem": "vmd", 00:36:01.093 "config": [] 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "subsystem": "accel", 00:36:01.093 "config": [ 00:36:01.093 { 00:36:01.093 "method": "accel_set_options", 00:36:01.093 "params": { 00:36:01.093 "small_cache_size": 128, 00:36:01.093 "large_cache_size": 16, 00:36:01.093 "task_count": 2048, 00:36:01.093 "sequence_count": 2048, 00:36:01.093 "buf_count": 2048 00:36:01.093 } 00:36:01.093 } 00:36:01.093 ] 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "subsystem": "bdev", 00:36:01.093 "config": [ 00:36:01.093 { 00:36:01.093 "method": "bdev_set_options", 00:36:01.093 
"params": { 00:36:01.093 "bdev_io_pool_size": 65535, 00:36:01.093 "bdev_io_cache_size": 256, 00:36:01.093 "bdev_auto_examine": true, 00:36:01.093 "iobuf_small_cache_size": 128, 00:36:01.093 "iobuf_large_cache_size": 16 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "bdev_raid_set_options", 00:36:01.093 "params": { 00:36:01.093 "process_window_size_kb": 1024, 00:36:01.093 "process_max_bandwidth_mb_sec": 0 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "bdev_iscsi_set_options", 00:36:01.093 "params": { 00:36:01.093 "timeout_sec": 30 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "bdev_nvme_set_options", 00:36:01.093 "params": { 00:36:01.093 "action_on_timeout": "none", 00:36:01.093 "timeout_us": 0, 00:36:01.093 "timeout_admin_us": 0, 00:36:01.093 "keep_alive_timeout_ms": 10000, 00:36:01.093 "arbitration_burst": 0, 00:36:01.093 "low_priority_weight": 0, 00:36:01.093 "medium_priority_weight": 0, 00:36:01.093 "high_priority_weight": 0, 00:36:01.093 "nvme_adminq_poll_period_us": 10000, 00:36:01.093 "nvme_ioq_poll_period_us": 0, 00:36:01.093 "io_queue_requests": 512, 00:36:01.093 "delay_cmd_submit": true, 00:36:01.093 "transport_retry_count": 4, 00:36:01.093 "bdev_retry_count": 3, 00:36:01.093 "transport_ack_timeout": 0, 00:36:01.093 "ctrlr_loss_timeout_sec": 0, 00:36:01.093 "reconnect_delay_sec": 0, 00:36:01.093 "fast_io_fail_timeout_sec": 0, 00:36:01.093 "disable_auto_failback": false, 00:36:01.093 "generate_uuids": false, 00:36:01.093 "transport_tos": 0, 00:36:01.093 "nvme_error_stat": false, 00:36:01.093 "rdma_srq_size": 0, 00:36:01.093 "io_path_stat": false, 00:36:01.093 "allow_accel_sequence": false, 00:36:01.093 "rdma_max_cq_size": 0, 00:36:01.093 "rdma_cm_event_timeout_ms": 0, 00:36:01.093 "dhchap_digests": [ 00:36:01.093 "sha256", 00:36:01.093 "sha384", 00:36:01.093 "sha512" 00:36:01.093 ], 00:36:01.093 "dhchap_dhgroups": [ 00:36:01.093 "null", 00:36:01.093 "ffdhe2048", 00:36:01.093 "ffdhe3072", 00:36:01.093 "ffdhe4096", 00:36:01.093 "ffdhe6144", 00:36:01.093 "ffdhe8192" 00:36:01.093 ] 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "bdev_nvme_attach_controller", 00:36:01.093 "params": { 00:36:01.093 "name": "nvme0", 00:36:01.093 "trtype": "TCP", 00:36:01.093 "adrfam": "IPv4", 00:36:01.093 "traddr": "127.0.0.1", 00:36:01.093 "trsvcid": "4420", 00:36:01.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.093 "prchk_reftag": false, 00:36:01.093 "prchk_guard": false, 00:36:01.093 "ctrlr_loss_timeout_sec": 0, 00:36:01.093 "reconnect_delay_sec": 0, 00:36:01.093 "fast_io_fail_timeout_sec": 0, 00:36:01.093 "psk": "key0", 00:36:01.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.093 "hdgst": false, 00:36:01.093 "ddgst": false 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "bdev_nvme_set_hotplug", 00:36:01.093 "params": { 00:36:01.093 "period_us": 100000, 00:36:01.093 "enable": false 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "method": "bdev_wait_for_examine" 00:36:01.093 } 00:36:01.093 ] 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "subsystem": "nbd", 00:36:01.093 "config": [] 00:36:01.093 } 00:36:01.093 ] 00:36:01.093 }' 00:36:01.093 18:36:27 keyring_file -- keyring/file.sh@114 -- # killprocess 1646791 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1646791 ']' 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1646791 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@955 -- # uname 
00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1646791 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1646791' 00:36:01.093 killing process with pid 1646791 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@969 -- # kill 1646791 00:36:01.093 Received shutdown signal, test time was about 1.000000 seconds 00:36:01.093 00:36:01.093 Latency(us) 00:36:01.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.093 =================================================================================================================== 00:36:01.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:01.093 18:36:27 keyring_file -- common/autotest_common.sh@974 -- # wait 1646791 00:36:01.352 18:36:27 keyring_file -- keyring/file.sh@117 -- # bperfpid=1648172 00:36:01.352 18:36:27 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1648172 /var/tmp/bperf.sock 00:36:01.352 18:36:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1648172 ']' 00:36:01.352 18:36:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:01.352 18:36:27 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:01.352 18:36:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:01.352 18:36:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:01.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
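After the first bperf instance is killed, keyring/file.sh@115 restarts bdevperf with the captured config fed through a process substitution, which is where the `-c /dev/fd/63` in the command line above comes from. A hedged re-creation of that pattern, reusing `$config` from the earlier sketch and the bdevperf path printed in the log:

```bash
# Relaunch bdevperf, replaying the saved config through /dev/fd (a sketch, not
# the verbatim helper): <(...) is what appears as -c /dev/fd/63 above.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!
```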
00:36:01.352 18:36:27 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:01.352 "subsystems": [ 00:36:01.352 { 00:36:01.352 "subsystem": "keyring", 00:36:01.352 "config": [ 00:36:01.353 { 00:36:01.353 "method": "keyring_file_add_key", 00:36:01.353 "params": { 00:36:01.353 "name": "key0", 00:36:01.353 "path": "/tmp/tmp.HebdwP7Ha1" 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "keyring_file_add_key", 00:36:01.353 "params": { 00:36:01.353 "name": "key1", 00:36:01.353 "path": "/tmp/tmp.djTHBRY4sY" 00:36:01.353 } 00:36:01.353 } 00:36:01.353 ] 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "subsystem": "iobuf", 00:36:01.353 "config": [ 00:36:01.353 { 00:36:01.353 "method": "iobuf_set_options", 00:36:01.353 "params": { 00:36:01.353 "small_pool_count": 8192, 00:36:01.353 "large_pool_count": 1024, 00:36:01.353 "small_bufsize": 8192, 00:36:01.353 "large_bufsize": 135168 00:36:01.353 } 00:36:01.353 } 00:36:01.353 ] 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "subsystem": "sock", 00:36:01.353 "config": [ 00:36:01.353 { 00:36:01.353 "method": "sock_set_default_impl", 00:36:01.353 "params": { 00:36:01.353 "impl_name": "posix" 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "sock_impl_set_options", 00:36:01.353 "params": { 00:36:01.353 "impl_name": "ssl", 00:36:01.353 "recv_buf_size": 4096, 00:36:01.353 "send_buf_size": 4096, 00:36:01.353 "enable_recv_pipe": true, 00:36:01.353 "enable_quickack": false, 00:36:01.353 "enable_placement_id": 0, 00:36:01.353 "enable_zerocopy_send_server": true, 00:36:01.353 "enable_zerocopy_send_client": false, 00:36:01.353 "zerocopy_threshold": 0, 00:36:01.353 "tls_version": 0, 00:36:01.353 "enable_ktls": false 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "sock_impl_set_options", 00:36:01.353 "params": { 00:36:01.353 "impl_name": "posix", 00:36:01.353 "recv_buf_size": 2097152, 00:36:01.353 "send_buf_size": 2097152, 00:36:01.353 "enable_recv_pipe": true, 00:36:01.353 "enable_quickack": false, 00:36:01.353 "enable_placement_id": 0, 00:36:01.353 "enable_zerocopy_send_server": true, 00:36:01.353 "enable_zerocopy_send_client": false, 00:36:01.353 "zerocopy_threshold": 0, 00:36:01.353 "tls_version": 0, 00:36:01.353 "enable_ktls": false 00:36:01.353 } 00:36:01.353 } 00:36:01.353 ] 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "subsystem": "vmd", 00:36:01.353 "config": [] 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "subsystem": "accel", 00:36:01.353 "config": [ 00:36:01.353 { 00:36:01.353 "method": "accel_set_options", 00:36:01.353 "params": { 00:36:01.353 "small_cache_size": 128, 00:36:01.353 "large_cache_size": 16, 00:36:01.353 "task_count": 2048, 00:36:01.353 "sequence_count": 2048, 00:36:01.353 "buf_count": 2048 00:36:01.353 } 00:36:01.353 } 00:36:01.353 ] 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "subsystem": "bdev", 00:36:01.353 "config": [ 00:36:01.353 { 00:36:01.353 "method": "bdev_set_options", 00:36:01.353 "params": { 00:36:01.353 "bdev_io_pool_size": 65535, 00:36:01.353 "bdev_io_cache_size": 256, 00:36:01.353 "bdev_auto_examine": true, 00:36:01.353 "iobuf_small_cache_size": 128, 00:36:01.353 "iobuf_large_cache_size": 16 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "bdev_raid_set_options", 00:36:01.353 "params": { 00:36:01.353 "process_window_size_kb": 1024, 00:36:01.353 "process_max_bandwidth_mb_sec": 0 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "bdev_iscsi_set_options", 00:36:01.353 "params": { 00:36:01.353 "timeout_sec": 30 00:36:01.353 } 00:36:01.353 
}, 00:36:01.353 { 00:36:01.353 "method": "bdev_nvme_set_options", 00:36:01.353 "params": { 00:36:01.353 "action_on_timeout": "none", 00:36:01.353 "timeout_us": 0, 00:36:01.353 "timeout_admin_us": 0, 00:36:01.353 "keep_alive_timeout_ms": 10000, 00:36:01.353 "arbitration_burst": 0, 00:36:01.353 "low_priority_weight": 0, 00:36:01.353 "medium_priority_weight": 0, 00:36:01.353 "high_priority_weight": 0, 00:36:01.353 "nvme_adminq_poll_period_us": 10000, 00:36:01.353 "nvme_ioq_poll_period_us": 0, 00:36:01.353 "io_queue_requests": 512, 00:36:01.353 "delay_cmd_submit": true, 00:36:01.353 "transport_retry_count": 4, 00:36:01.353 "bdev_retry_count": 3, 00:36:01.353 "transport_ack_timeout": 0, 00:36:01.353 "ctrlr_loss_timeout_sec": 0, 00:36:01.353 "reconnect_delay_sec": 0, 00:36:01.353 "fast_io_fail_timeout_sec": 0, 00:36:01.353 "disable_auto_failback": false, 00:36:01.353 "generate_uuids": false, 00:36:01.353 "transport_tos": 0, 00:36:01.353 "nvme_error_stat": false, 00:36:01.353 "rdma_srq_size": 0, 00:36:01.353 "io_path_stat": false, 00:36:01.353 "allow_accel_sequence": false, 00:36:01.353 "rdma_max_cq_size": 0, 00:36:01.353 "rdma_cm_event_timeout_ms": 0, 00:36:01.353 "dhchap_digests": [ 00:36:01.353 "sha256", 00:36:01.353 "sha384", 00:36:01.353 "sha512" 00:36:01.353 ], 00:36:01.353 "dhchap_dhgroups": [ 00:36:01.353 "null", 00:36:01.353 "ffdhe2048", 00:36:01.353 "ffdhe3072", 00:36:01.353 "ffdhe4096", 00:36:01.353 "ffdhe6144", 00:36:01.353 "ffdhe8192" 00:36:01.353 ] 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "bdev_nvme_attach_controller", 00:36:01.353 "params": { 00:36:01.353 "name": "nvme0", 00:36:01.353 "trtype": "TCP", 00:36:01.353 "adrfam": "IPv4", 00:36:01.353 "traddr": "127.0.0.1", 00:36:01.353 "trsvcid": "4420", 00:36:01.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.353 "prchk_reftag": false, 00:36:01.353 "prchk_guard": false, 00:36:01.353 "ctrlr_loss_timeout_sec": 0, 00:36:01.353 "reconnect_delay_sec": 0, 00:36:01.353 "fast_io_fail_timeout_sec": 0, 00:36:01.353 "psk": "key0", 00:36:01.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.353 "hdgst": false, 00:36:01.353 "ddgst": false 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "bdev_nvme_set_hotplug", 00:36:01.353 "params": { 00:36:01.353 "period_us": 100000, 00:36:01.353 "enable": false 00:36:01.353 } 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "method": "bdev_wait_for_examine" 00:36:01.353 } 00:36:01.353 ] 00:36:01.353 }, 00:36:01.353 { 00:36:01.353 "subsystem": "nbd", 00:36:01.353 "config": [] 00:36:01.353 } 00:36:01.353 ] 00:36:01.353 }' 00:36:01.353 18:36:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:01.353 18:36:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:01.353 [2024-07-26 18:36:27.423827] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:01.353 [2024-07-26 18:36:27.423918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648172 ] 00:36:01.353 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.353 [2024-07-26 18:36:27.457965] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
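Between the EAL startup notices above and the first keyring_get_keys RPC below, the test sits in waitforlisten until the relaunched process answers on /var/tmp/bperf.sock. The real helper lives in autotest_common.sh; this is only an assumed, simplified equivalent of what it polls for:

```bash
# Assumption: a minimal stand-in for autotest_common.sh's waitforlisten.
waitforlisten_sketch() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1             # target died early
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                               # never came up
}
waitforlisten_sketch "$bperfpid" /var/tmp/bperf.sock
```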
00:36:01.353 [2024-07-26 18:36:27.486364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.613 [2024-07-26 18:36:27.572797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.873 [2024-07-26 18:36:27.758103] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:02.438 18:36:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:02.438 18:36:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:02.438 18:36:28 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:02.438 18:36:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.438 18:36:28 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:02.696 18:36:28 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:02.696 18:36:28 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:02.696 18:36:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:02.696 18:36:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.696 18:36:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.696 18:36:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.696 18:36:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:02.954 18:36:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:02.954 18:36:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:02.954 18:36:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:02.954 18:36:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.954 18:36:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.954 18:36:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.954 18:36:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:03.211 18:36:29 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:03.211 18:36:29 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:03.211 18:36:29 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:03.211 18:36:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:03.469 18:36:29 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:03.469 18:36:29 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:03.469 18:36:29 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HebdwP7Ha1 /tmp/tmp.djTHBRY4sY 00:36:03.469 18:36:29 keyring_file -- keyring/file.sh@20 -- # killprocess 1648172 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1648172 ']' 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1648172 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1648172 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:03.469 18:36:29 
keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1648172' 00:36:03.469 killing process with pid 1648172 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@969 -- # kill 1648172 00:36:03.469 Received shutdown signal, test time was about 1.000000 seconds 00:36:03.469 00:36:03.469 Latency(us) 00:36:03.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.469 =================================================================================================================== 00:36:03.469 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:03.469 18:36:29 keyring_file -- common/autotest_common.sh@974 -- # wait 1648172 00:36:03.727 18:36:29 keyring_file -- keyring/file.sh@21 -- # killprocess 1646786 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1646786 ']' 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1646786 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1646786 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1646786' 00:36:03.727 killing process with pid 1646786 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@969 -- # kill 1646786 00:36:03.727 [2024-07-26 18:36:29.674370] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:03.727 18:36:29 keyring_file -- common/autotest_common.sh@974 -- # wait 1646786 00:36:03.987 00:36:03.987 real 0m14.094s 00:36:03.987 user 0m34.931s 00:36:03.987 sys 0m3.226s 00:36:03.987 18:36:30 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.987 18:36:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:03.987 ************************************ 00:36:03.987 END TEST keyring_file 00:36:03.987 ************************************ 00:36:03.987 18:36:30 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:36:03.987 18:36:30 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:03.987 18:36:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:03.987 18:36:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.987 18:36:30 -- common/autotest_common.sh@10 -- # set +x 00:36:03.987 ************************************ 00:36:03.987 START TEST keyring_linux 00:36:03.987 ************************************ 00:36:03.987 18:36:30 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:04.246 * Looking for test storage... 
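The keyring_file suite closes above by proving the restarted bperf rebuilt its state from the replayed config: two keys in the keyring, key0 referenced by both the keyring and nvme0, and the controller name intact. The same checks written out explicitly, using the jq filters keyring/common.sh applies at file.sh@120-123:

```bash
# Post-restart verification, mirroring file.sh@120-123 above.
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length            # expect 2
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'                 # expect 2
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_get_controllers \
    | jq -r '.[].name'                                                # expect nvme0
```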
00:36:04.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.246 18:36:30 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.246 18:36:30 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.246 18:36:30 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.246 18:36:30 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.246 18:36:30 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.246 18:36:30 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.246 18:36:30 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:04.246 18:36:30 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:04.246 18:36:30 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:04.246 /tmp/:spdk-test:key0 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:04.246 18:36:30 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:04.246 18:36:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:04.246 /tmp/:spdk-test:key1 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1648611 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:04.246 18:36:30 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1648611 00:36:04.246 18:36:30 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1648611 ']' 00:36:04.246 18:36:30 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.246 18:36:30 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.246 18:36:30 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.246 18:36:30 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.246 18:36:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:04.246 [2024-07-26 18:36:30.293859] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:04.246 [2024-07-26 18:36:30.293946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648611 ] 00:36:04.246 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.246 [2024-07-26 18:36:30.325244] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
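The prep_key/format_interchange_psk calls above shell out to an inline Python snippet to turn the raw key into the NVMe TLS interchange form. A sketch of what that helper computes, under the assumption that the trailing four bytes are the little-endian CRC32 of the key, base64-encoded together with it (the `00` digest field meaning no PSK hash); the expected output is the exact string keyctl stores a few lines below:

```bash
# Hedged sketch of format_interchange_psk for key0 with digest 0.
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                   # the configured key, taken as ASCII
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC32 trailer
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
EOF
# -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
```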
00:36:04.246 [2024-07-26 18:36:30.350878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.505 [2024-07-26 18:36:30.440776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.763 18:36:30 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.763 18:36:30 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:04.763 18:36:30 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:04.763 18:36:30 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.763 18:36:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:04.763 [2024-07-26 18:36:30.699301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.763 null0 00:36:04.763 [2024-07-26 18:36:30.731389] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:04.763 [2024-07-26 18:36:30.731903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:04.763 18:36:30 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.763 18:36:30 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:04.763 220848210 00:36:04.763 18:36:30 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:04.763 358418699 00:36:04.763 18:36:30 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1648625 00:36:04.763 18:36:30 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:04.764 18:36:30 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1648625 /var/tmp/bperf.sock 00:36:04.764 18:36:30 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1648625 ']' 00:36:04.764 18:36:30 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:04.764 18:36:30 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.764 18:36:30 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:04.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:04.764 18:36:30 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.764 18:36:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:04.764 [2024-07-26 18:36:30.799007] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:04.764 [2024-07-26 18:36:30.799108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648625 ] 00:36:04.764 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.764 [2024-07-26 18:36:30.831439] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
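Where keyring_file staged keys in /tmp files, keyring_linux loads the same interchange strings into the kernel session keyring: the bare numbers printed after each `keyctl add` above (220848210 and 358418699) are the key serials. The round trip that the later check_keys assertions rely on, as a sketch matching linux.sh@66 and the get_keysn helper at linux.sh@16:

```bash
# Kernel-keyring round trip for key0.
sn=$(keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0    # get_keysn: resolves the same serial
keyctl print "$sn"                       # payload must match the interchange string
```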
00:36:04.764 [2024-07-26 18:36:30.861983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.021 [2024-07-26 18:36:30.955172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.021 18:36:31 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:05.021 18:36:31 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:05.021 18:36:31 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:05.021 18:36:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:05.278 18:36:31 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:05.278 18:36:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:05.536 18:36:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:05.536 18:36:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:05.794 [2024-07-26 18:36:31.819461] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:05.794 nvme0n1 00:36:05.794 18:36:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:05.794 18:36:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:05.794 18:36:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:05.794 18:36:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:05.794 18:36:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:05.794 18:36:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.052 18:36:32 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:06.052 18:36:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:06.052 18:36:32 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:06.052 18:36:32 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:06.052 18:36:32 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.052 18:36:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.052 18:36:32 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@25 -- # sn=220848210 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 220848210 == \2\2\0\8\4\8\2\1\0 ]] 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 220848210 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:06.310 18:36:32 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:06.568 Running I/O for 1 seconds... 00:36:07.506 00:36:07.506 Latency(us) 00:36:07.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.506 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:07.506 nvme0n1 : 1.02 3924.21 15.33 0.00 0.00 32274.00 10000.31 44273.21 00:36:07.506 =================================================================================================================== 00:36:07.506 Total : 3924.21 15.33 0.00 0.00 32274.00 10000.31 44273.21 00:36:07.506 0 00:36:07.506 18:36:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:07.506 18:36:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:07.764 18:36:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:07.764 18:36:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:07.764 18:36:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:07.764 18:36:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:07.764 18:36:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.764 18:36:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:08.022 18:36:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:08.022 18:36:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:08.022 18:36:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:08.022 18:36:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.022 18:36:34 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:08.022 18:36:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:08.280 [2024-07-26 18:36:34.293422] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:08.280 [2024-07-26 18:36:34.294220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e47a00 (107): Transport endpoint is not connected 00:36:08.280 [2024-07-26 18:36:34.295213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e47a00 (9): Bad file descriptor 00:36:08.280 [2024-07-26 18:36:34.296211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.280 [2024-07-26 18:36:34.296239] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:08.280 [2024-07-26 18:36:34.296253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.280 request: 00:36:08.280 { 00:36:08.280 "name": "nvme0", 00:36:08.280 "trtype": "tcp", 00:36:08.280 "traddr": "127.0.0.1", 00:36:08.280 "adrfam": "ipv4", 00:36:08.280 "trsvcid": "4420", 00:36:08.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:08.280 "prchk_reftag": false, 00:36:08.280 "prchk_guard": false, 00:36:08.280 "hdgst": false, 00:36:08.280 "ddgst": false, 00:36:08.280 "psk": ":spdk-test:key1", 00:36:08.280 "method": "bdev_nvme_attach_controller", 00:36:08.280 "req_id": 1 00:36:08.280 } 00:36:08.280 Got JSON-RPC error response 00:36:08.280 response: 00:36:08.280 { 00:36:08.280 "code": -5, 00:36:08.280 "message": "Input/output error" 00:36:08.280 } 00:36:08.280 18:36:34 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:08.280 18:36:34 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.280 18:36:34 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.280 18:36:34 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.280 18:36:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:08.280 18:36:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:08.280 18:36:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:08.280 18:36:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:08.280 18:36:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:08.280 18:36:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@33 -- # sn=220848210 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 220848210 00:36:08.281 1 links removed 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@33 -- # sn=358418699 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 358418699 00:36:08.281 1 links removed 00:36:08.281 18:36:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1648625 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1648625 ']' 00:36:08.281 18:36:34 
keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1648625 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1648625 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1648625' 00:36:08.281 killing process with pid 1648625 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 1648625 00:36:08.281 Received shutdown signal, test time was about 1.000000 seconds 00:36:08.281 00:36:08.281 Latency(us) 00:36:08.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.281 =================================================================================================================== 00:36:08.281 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:08.281 18:36:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 1648625 00:36:08.540 18:36:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1648611 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1648611 ']' 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1648611 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1648611 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1648611' 00:36:08.540 killing process with pid 1648611 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 1648611 00:36:08.540 18:36:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 1648611 00:36:09.109 00:36:09.109 real 0m4.876s 00:36:09.109 user 0m9.102s 00:36:09.109 sys 0m1.513s 00:36:09.109 18:36:34 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:09.109 18:36:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:09.109 ************************************ 00:36:09.109 END TEST keyring_linux 00:36:09.109 ************************************ 00:36:09.109 18:36:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:36:09.109 18:36:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 
00:36:09.109 18:36:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:09.109 18:36:35 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:09.109 18:36:35 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:36:09.109 18:36:35 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:36:09.109 18:36:35 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:36:09.109 18:36:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:09.109 18:36:35 -- common/autotest_common.sh@10 -- # set +x 00:36:09.109 18:36:35 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:36:09.109 18:36:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:09.109 18:36:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:09.109 18:36:35 -- common/autotest_common.sh@10 -- # set +x 00:36:11.015 INFO: APP EXITING 00:36:11.015 INFO: killing all VMs 00:36:11.015 INFO: killing vhost app 00:36:11.015 INFO: EXIT DONE 00:36:11.952 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:11.952 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:11.952 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:11.952 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:11.952 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:11.952 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:11.952 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:11.952 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:11.952 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:11.952 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:11.952 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:11.952 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:11.952 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:11.952 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:11.952 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:11.952 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:11.952 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:13.327 Cleaning 00:36:13.327 Removing: /var/run/dpdk/spdk0/config 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:13.327 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:13.327 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:13.327 Removing: /var/run/dpdk/spdk1/config 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:13.327 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:13.327 Removing: 
/var/run/dpdk/spdk1/hugepage_info 00:36:13.327 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:13.327 Removing: /var/run/dpdk/spdk2/config 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:13.327 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:13.327 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:13.327 Removing: /var/run/dpdk/spdk3/config 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:13.327 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:13.327 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:13.327 Removing: /var/run/dpdk/spdk4/config 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:13.327 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:13.327 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:13.327 Removing: /dev/shm/bdev_svc_trace.1 00:36:13.327 Removing: /dev/shm/nvmf_trace.0 00:36:13.327 Removing: /dev/shm/spdk_tgt_trace.pid1332238 00:36:13.327 Removing: /var/run/dpdk/spdk0 00:36:13.327 Removing: /var/run/dpdk/spdk1 00:36:13.327 Removing: /var/run/dpdk/spdk2 00:36:13.327 Removing: /var/run/dpdk/spdk3 00:36:13.327 Removing: /var/run/dpdk/spdk4 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1330687 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1331417 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1332238 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1332672 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1333360 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1333500 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1334218 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1334228 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1334467 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1335781 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1336711 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1337041 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1337290 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1337517 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1337707 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1337862 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1338022 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1338483 00:36:13.327 
Removing: /var/run/dpdk/spdk_pid1339278 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1341630 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1341793 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1341962 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1341970 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1342393 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1342403 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1342710 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1342833 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1343008 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1343128 00:36:13.327 Removing: /var/run/dpdk/spdk_pid1343296 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1343313 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1343674 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1343836 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1344142 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1346170 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1348718 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1355719 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1356132 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1358638 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1358801 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1361313 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1365017 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1367079 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1374084 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1379291 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1380491 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1381181 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1391389 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1393668 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1447333 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1450502 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1454324 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1458150 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1458160 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1458817 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1459470 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1460018 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1460543 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1460553 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1460720 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1460821 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1460827 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1461480 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1462135 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1462676 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1463072 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1463189 00:36:13.328 Removing: /var/run/dpdk/spdk_pid1463336 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1464221 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1465042 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1470864 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1496123 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1498905 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1500086 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1501283 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1501420 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1501552 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1501692 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1502019 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1503337 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1504055 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1504369 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1505982 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1506406 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1506846 00:36:13.586 
Removing: /var/run/dpdk/spdk_pid1509349 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1512596 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1516074 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1539341 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1542114 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1545863 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1547314 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1548412 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1550985 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1553218 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1557489 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1557551 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1560314 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1560455 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1560593 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1560861 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1560871 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1561939 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1563231 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1564413 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1565588 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1566769 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1567946 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1571635 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1572078 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1573357 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1574094 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1577903 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1580267 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1583675 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1586991 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1593201 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1597650 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1597671 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1609875 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1610281 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1610688 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1611170 00:36:13.586 Removing: /var/run/dpdk/spdk_pid1611791 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1612318 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1613230 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1613635 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1616126 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1616274 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1620056 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1620192 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1621825 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1626737 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1626747 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1629627 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1631015 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1632428 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1633167 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1634586 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1635449 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1640775 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1641102 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1641493 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1643050 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1643446 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1643843 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1646786 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1646791 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1648172 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1648611 00:36:13.587 Removing: /var/run/dpdk/spdk_pid1648625 00:36:13.587 Clean 00:36:13.845 18:36:39 -- 
common/autotest_common.sh@1451 -- # return 0 00:36:13.845 18:36:39 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:36:13.845 18:36:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:13.845 18:36:39 -- common/autotest_common.sh@10 -- # set +x 00:36:13.845 18:36:39 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:36:13.845 18:36:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:13.845 18:36:39 -- common/autotest_common.sh@10 -- # set +x 00:36:13.845 18:36:39 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:13.845 18:36:39 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:13.845 18:36:39 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:13.845 18:36:39 -- spdk/autotest.sh@395 -- # hash lcov 00:36:13.845 18:36:39 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:13.845 18:36:39 -- spdk/autotest.sh@397 -- # hostname 00:36:13.845 18:36:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:14.104 geninfo: WARNING: invalid characters removed from testname! 00:36:46.211 18:37:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:46.211 18:37:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:48.108 18:37:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:51.386 18:37:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:53.933 18:37:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:57.213 18:37:22 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:59.743 18:37:25 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:59.743 18:37:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.743 18:37:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:59.743 18:37:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.743 18:37:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.743 18:37:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.743 18:37:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.743 18:37:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.743 18:37:25 -- paths/export.sh@5 -- $ export PATH 00:36:59.743 18:37:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.743 18:37:25 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:59.743 18:37:25 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:59.743 18:37:25 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722011845.XXXXXX 00:36:59.743 18:37:25 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722011845.XvJpXV 00:36:59.743 18:37:25 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:59.743 18:37:25 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:36:59.743 18:37:25 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:59.743 18:37:25 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:59.743 18:37:25 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:59.743 18:37:25 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:59.743 18:37:25 -- common/autobuild_common.sh@463 -- $ get_config_params 00:36:59.743 18:37:25 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:36:59.743 18:37:25 -- common/autotest_common.sh@10 -- $ set +x 00:36:59.743 18:37:25 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:59.743 18:37:25 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:59.743 18:37:25 -- pm/common@17 -- $ local monitor 00:36:59.743 18:37:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:59.743 18:37:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:59.743 18:37:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:59.743 18:37:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:59.743 18:37:25 -- pm/common@21 -- $ date +%s 00:36:59.743 18:37:25 -- pm/common@25 -- $ sleep 1 00:36:59.743 18:37:25 -- pm/common@21 -- $ date +%s 00:36:59.743 18:37:25 -- pm/common@21 -- $ date +%s 00:36:59.743 18:37:25 -- pm/common@21 -- $ date +%s 00:36:59.743 18:37:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722011845 00:36:59.743 18:37:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722011845 00:36:59.743 18:37:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722011845 00:36:59.743 18:37:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722011845 00:36:59.743 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722011845_collect-vmstat.pm.log 00:36:59.743 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722011845_collect-cpu-temp.pm.log 00:36:59.743 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722011845_collect-cpu-load.pm.log 00:36:59.743 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722011845_collect-bmc-pm.bmc.pm.log 00:37:00.683 18:37:26 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:00.684 18:37:26 
-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:00.684 18:37:26 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:00.684 18:37:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:00.684 18:37:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:00.684 18:37:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:00.684 18:37:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:00.684 18:37:26 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:00.684 18:37:26 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:00.684 18:37:26 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:00.684 18:37:26 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:00.684 18:37:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:00.684 18:37:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:00.684 18:37:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:00.684 18:37:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:00.684 18:37:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:00.684 18:37:26 -- pm/common@44 -- $ pid=1659750 00:37:00.684 18:37:26 -- pm/common@50 -- $ kill -TERM 1659750 00:37:00.684 18:37:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:00.684 18:37:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:00.684 18:37:26 -- pm/common@44 -- $ pid=1659752 00:37:00.684 18:37:26 -- pm/common@50 -- $ kill -TERM 1659752 00:37:00.684 18:37:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:00.684 18:37:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:00.684 18:37:26 -- pm/common@44 -- $ pid=1659754 00:37:00.684 18:37:26 -- pm/common@50 -- $ kill -TERM 1659754 00:37:00.684 18:37:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:00.684 18:37:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:00.684 18:37:26 -- pm/common@44 -- $ pid=1659778 00:37:00.684 18:37:26 -- pm/common@50 -- $ sudo -E kill -TERM 1659778 00:37:00.684 + [[ -n 1231130 ]] 00:37:00.684 + sudo kill 1231130 00:37:00.695 [Pipeline] } 00:37:00.714 [Pipeline] // stage 00:37:00.720 [Pipeline] } 00:37:00.739 [Pipeline] // timeout 00:37:00.745 [Pipeline] } 00:37:00.763 [Pipeline] // catchError 00:37:00.769 [Pipeline] } 00:37:00.789 [Pipeline] // wrap 00:37:00.796 [Pipeline] } 00:37:00.816 [Pipeline] // catchError 00:37:00.826 [Pipeline] stage 00:37:00.828 [Pipeline] { (Epilogue) 00:37:00.844 [Pipeline] catchError 00:37:00.846 [Pipeline] { 00:37:00.861 [Pipeline] echo 00:37:00.863 Cleanup processes 00:37:00.869 [Pipeline] sh 00:37:01.157 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:01.157 1659885 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:01.157 1660016 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:01.173 [Pipeline] sh 00:37:01.458 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:01.458 ++ grep -v 'sudo pgrep' 00:37:01.458 ++ awk 
'{print $1}' 00:37:01.458 + sudo kill -9 1659885 00:37:01.471 [Pipeline] sh 00:37:01.801 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:11.803 [Pipeline] sh 00:37:12.089 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:12.089 Artifacts sizes are good 00:37:12.105 [Pipeline] archiveArtifacts 00:37:12.112 Archiving artifacts 00:37:12.355 [Pipeline] sh 00:37:12.639 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:12.655 [Pipeline] cleanWs 00:37:12.665 [WS-CLEANUP] Deleting project workspace... 00:37:12.665 [WS-CLEANUP] Deferred wipeout is used... 00:37:12.672 [WS-CLEANUP] done 00:37:12.674 [Pipeline] } 00:37:12.696 [Pipeline] // catchError 00:37:12.709 [Pipeline] sh 00:37:12.991 + logger -p user.info -t JENKINS-CI 00:37:13.000 [Pipeline] } 00:37:13.017 [Pipeline] // stage 00:37:13.022 [Pipeline] } 00:37:13.039 [Pipeline] // node 00:37:13.044 [Pipeline] End of Pipeline 00:37:13.087 Finished: SUCCESS
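For reference, the keyring_linux TLS flow exercised at the top of this section can be replayed by hand against a bdevperf instance listening on the same RPC socket. The sketch below is assembled only from commands that appear verbatim in this log (keyring/linux.sh and keyring/common.sh); the PSK string is the interchange-format value checked above, the socket path and NQNs are this run's test values, and the pairing with the name :spdk-test:key0 is illustrative, since the test registers two such keys. It assumes the NVMe/TCP target configured earlier in the job is still up.

# Place the interchange-format PSK in the session keyring under the name the test scripts use.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# Attach a TLS-protected NVMe/TCP controller, passing the key by keyring name
# rather than by file (the keyring/linux.sh@79 path above).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Drive I/O through the attached bdev, then detach (linux.sh@79-80 above).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0

# Clean up: resolve the key's serial number and unlink it, as the cleanup trap does above.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl unlink "$sn"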
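The coverage post-processing near the end of the run likewise reduces to a small lcov pipeline. This sketch mirrors the spdk/autotest.sh@397-403 calls above; the --rc switches and filter globs are copied verbatim from this log, while LCOV_OPTS and the cov_*.info names written to the current directory are local shorthand introduced here for brevity.

# rc switches used by every lcov call in this run.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external"

# Capture test-time counters from the build tree, tagged with the hostname (autotest.sh@397).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
lcov $LCOV_OPTS -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info

# Merge the baseline and test captures, then strip DPDK and system paths
# (autotest.sh@398-400; the run above repeats -r for the remaining app globs).
lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info
lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov $LCOV_OPTS -q -r cov_total.info '/usr/*' -o cov_total.info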